
Pro Tools 10


UstadKhanAli


  • CMS Author

A lot of expressiveness with traditional guitar comes from varying its volume control as it feeds into an amp. There are multiple elements at play . . . Amp sims also respond to changes in input level, and it's cool to be able to automate these changes - e.g., a rising level as you build up a chord so the drive gets more and more intense. But, you can't do that with traditional DAW automation, because it comes after the amp sim.


One workaround is to do your level variations in real time, but then the guitar's volume control is interacting with an interface's hi-Z input, not an amp input, which might not be what you want. The other is to automate the sim's drive control (if present). But with clip automation being before the processing, you can alter the drive going in, and with standard automation, control the output.

 

I guess my DAW inexperience and expectations are showing here. While I've never played around with it using the programs I have, I just expected that you could make or insert a level change anyplace in the chain you wanted it (or put a plug-in anywhere you want it in the chain). Maybe this really is a revolutionary (or is that revoltin'?) development.

 

But not to be an ol' sourpuss here: isn't dynamically changing the level going into the amplifier to control the sound something that the guitarist does when playing? B.B. King is one of the most expressive guitarists around, and I don't think I've ever seen him touch the volume control on his guitar when he's playing. I'll accept that being able to adjust, in effect, the volume of the input and output of the amplifier (or the gain of the channel of the mic on the amplifier) after the part has been played lets you change more things after you hear them, but why not just play it right to begin with and then you're done?

 

As to narration, again, clip level is pre-processing so if you're using compression, then you can make sure that loud sections don't get overly compressed, and bring up levels that need to exceed the compression threshold...while using automation on the overall track.

 

Certainly that's a good approach. It's what the Vocal Rider plug-in does. But again, if it was recorded right, or if we weren't obsessed with technical perfection in our productions, you wouldn't have to fuss with it.

 

But this really has nothing to do with Pro Tools other than what Pro Tools has allowed us to do, so we now have to do it. ;)


  • Members

The problem is the guitar is usually not going into an amplifier, but into a direct box or instrument input on an interface...so you don't get the same kind of interaction, although you can at least change the level going into the sim. But, remember that sometimes I use sims, and sometimes not. If I'm using a sim, it's to get sounds or effects I can't get otherwise.


For example, I'm really into using multiband distortion, which is a "creature of the DAW," as this is cumbersome to do live. Typically, there will be four bands. The guitar part gets recorded into the track dry, then the track is copied four times, and each copy is put through its own band and distortion.


You don't necessarily want the same amount of drive for each band (e.g. having the same amount in the highs can often sound shrill). Also, it's great to be able to vary the drive to change dynamics, and you can do this in the different bands which is wonderful...for example if it's an instrumental part, you can crank the mids more, but not if there's a vocal going on.


So you can't really "play it right" in real time unless you have four hands and four volume controls. With this kind of approach, I can concentrate on getting the part right, and worry about the fine points later. To me, this isn't the same thing as, say, punching, where you construct a "frankenstein" part out of various components. It's just an extension of the mixing process.
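For anyone who wants to experiment with the idea outside a DAW, here's a rough Python sketch of the four-band approach described above: split the dry track at some crossover points, overdrive each band separately (with less drive up top so the highs don't turn shrill), and sum. The crossover frequencies and drive amounts are made-up illustration values, not Craig's settings, and the one-pole split is far cruder than a real multiband plug-in's filters.

```python
import math

def onepole_lowpass(xs, cutoff_hz, sr=44100):
    # One-pole lowpass; (input - lowpassed) gives the matching highpass band
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in xs:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def multiband_distortion(xs, crossovers=(200.0, 800.0, 3200.0),
                         drives=(6.0, 4.0, 3.0, 1.5)):
    # Split into four bands at the crossover points
    bands, rest = [], xs
    for fc in crossovers:
        low = onepole_lowpass(rest, fc)
        bands.append(low)
        rest = [r - l for r, l in zip(rest, low)]  # everything above fc
    bands.append(rest)
    # Overdrive each band on its own -- note the gentler drive in the top band
    shaped = [[math.tanh(d * s) for s in band] for band, d in zip(bands, drives)]
    # Sum the four distorted bands back into one track
    return [sum(vals) for vals in zip(*shaped)]

sr = 44100
tone = [0.5 * math.sin(2.0 * math.pi * 100.0 * n / sr) for n in range(2048)]
out = multiband_distortion(tone)
```

Varying the per-band `drives` over time is the part that's impractical live: each value is, in effect, its own volume knob feeding its own distortion.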

 


  • Moderators
Well, here's the deal. A lot of expressiveness with traditional guitar comes from varying its volume control as it feeds into an amp. There are multiple elements at play - primarily the amount of signal going to the amp (therefore determining the amount of clipping), but also lower loading effects from the amp as you turn the level down because there's more resistance between the guitar and amp, and a change in loading characteristics on the pickups. (FWIW Eleven Rack's True-Z input attempts to translate these characteristics to the amp sim part of the unit.)


Amp sims also respond to changes in input level, and it's cool to be able to automate these changes - e.g., a rising level as you build up a chord so the drive gets more and more intense. But, you can't do that with traditional DAW automation, because it comes after the amp sim, not before, so you can only alter the output, not the drive.


The workarounds are to do your level variations in real time, but then the guitar's volume control is interacting with an interface's hi-Z input, not an amp input, which might not be what you want. The other is to automate the sim's drive control (if present). But with clip automation being before the processing, you can alter the drive going in, and with standard automation, control the output.


I realize that doesn't seem very rock and roll, but I use pre-processing clip gain all the time in Sonar to reduce the amount of drive when the guitar's playing rhythm, and to bring it up a bit in strategic places to add emphasis. The end result is that the amp sim sounds a lot more alive and expressive than having a "set and forget" drive level, because there's a timbral change, not just a level one.
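The pre-sim versus post-sim distinction is easy to see with a toy waveshaper standing in for an amp sim (tanh here is purely illustrative, not how any particular sim works): gain applied before the nonlinearity changes how hard it clips, i.e. the timbre, while gain applied after only scales the already-shaped result.

```python
import math

def amp_sim(x, drive=4.0):
    # Toy amp sim: a tanh waveshaper stands in for the real plug-in
    return math.tanh(drive * x)

peak = 0.5

# Clip gain (pre-sim): the waveshaper is driven harder -> flatter, more clipped
pre = amp_sim(2.0 * peak)

# Track automation (post-sim): the already-shaped signal just gets louder
post = 2.0 * amp_sim(peak)

# Same 2x "gain," very different results: pre saturates near the tanh ceiling,
# post scales linearly right past it
print(round(pre, 3), round(post, 3))  # 0.999 1.928
```

That's the whole argument in two numbers: doubling the input barely raises the peak (it changes the wave's shape instead), while doubling the output just makes the same sound louder.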


As to narration, again, clip level is pre-processing so if you're using compression, then you can make sure that loud sections don't get overly compressed, and bring up levels that need to exceed the compression threshold...while using automation on the overall track. The result is a much more even sound, without having to use as much compression as you would if you wanted to make sure you "caught" all the words.
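A static compressor curve makes the arithmetic behind this concrete; the threshold and ratio below are arbitrary example numbers. Trimming a hot clip down before the compressor means the compressor has far less work to do:

```python
def compress(level_db, threshold_db=-20.0, ratio=4.0):
    # Static compressor curve: above threshold, excess level is divided by the ratio
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# A shouted word hitting -4 dBFS gets squashed hard: 12 dB of gain reduction
hot = compress(-4.0)              # -16.0 dB out

# Pull the clip down 10 dB first, and the same word takes only 4.5 dB of reduction
trimmed = compress(-4.0 - 10.0)   # -18.5 dB out
print(hot, trimmed)
```

Quiet words work the other way: clip-gain them up over the threshold so they stay intelligible, and the track automation handles the overall level.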



:)

I automate the output buss fader to reamp, then into an amp. Could do the same with a sim. But automating the send fader into an amp is awesome. You can either do what a player could only do, reach for the adjustment between picking strokes, or do the impossible and have the volume change however you like. But stuff as simple as turning down the input gain on the amp via that reamped guitar signal on the verses, then cranking it going into the choruses, man... it's old school/new school goodness and fun.


  • CMS Author

The problem is the guitar is usually not going into an amplifier, but into a direct box or instrument input on an interface...so you don't get the same kind of interaction

 

It's sad, really sad, that this has become "usually." How will people ever learn to play the electric guitar? But then, I suppose there are a lot of people who don't know how to play a piano because all they've ever played is an electronic keyboard.

 

For example, I'm really into using multiband distortion, which is a "creature of the DAW," as this is cumbersome to do live. Typically, there will be four bands. The guitar part gets recorded into the track dry, then the track is copied four times, and each copy is put through its own band and distortion.

 

I suppose you could do that on stage with four amplifiers, in fact some probably have, perhaps not realizing that they were combining different types of distortion. But you're describing things that YOU do. You don't expect everybody to do that, do you? Otherwise you'd have to dream up some more tricks. ;)

 

So you can't really "play it right" in real time unless you have four hands and four volume controls. With this kind of approach, I can concentrate on getting the part right, and worry about the fine points later. To me, this isn't the same thing as, say, punching, where you construct a "frankenstein" part out of various components. It's just an extension of the mixing process.

 

That's certainly one way of making music, but it's not my way. But then I don't expect everyone to do it my way either.


  • Members

It's very sad that you feel that way, Mike. I don't suppose that music production offers much to you these days. When did you stop wanting to keep up with music technology? I think you would find that making music in a computer opens up vast creative opportunities that your mind can explore and use to great artistic benefit. Sorry that you don't see things that way.

 

Steve


  • CMS Author

 

It's very sad that you feel that way Mike. I don't suppose that music production offers much to you these days. When did you stop wanting to keep up with music technology?

 

 

Thanks for your sympathy, but I don't really need it.

 

My interest lies in playing music and preserving it for others to enjoy. To me, it's not about hearing sounds that nobody has ever heard before, it's about hearing melodies and words and instruments well played. Constructing music with a computer seems like a totally different thing to me and I just don't have the time or interest to become involved.

 

But what saddens me is not that people make music this way, but that it's becoming the dominant way of creating what's still called "music." I'm sure that some people who make music in this way are perfectly capable of doing it the old fashioned way and probably enjoy that, too. I'm not talking about those people. I'm talking about the people who say "Hey, I'm making music" when they're really just telling a computer what to do. It's a skill, sure. But neither doing it nor listening to it gives me the same sort of enjoyment as listening to music played in real time on real instruments, even with a few mistakes. That's real to me.

 

Understand that I don't enjoy and don't listen to all types of music that fit my limited definition. I don't like opera, not one bit. I don't mind (and don't argue with) people who don't like bluegrass or old time string band music (what you call "hillbilly" if you don't like it), and there are some forms of jazz that I don't enjoy (and others that I do). I like orchestral music and it doesn't bother me that some of what I hear is played on synthesized instruments (but is mostly composed and played by people who know music, not just sounds and computers).

 

The music that I enjoy most doesn't lend itself to computer processing to make it interesting or palatable to a large audience. I prefer Bela Fleck when he just plays his banjo.


  • Members

[from Craig] As to narration, again, clip level is pre-processing so if you're using compression, then you can make sure that loud sections don't get overly compressed, and bring up levels that need to exceed the compression threshold...while using automation on the overall track.

 

Certainly that's a good approach. It's what the Vocal Rider plug-in does. But again, if it was recorded right, or if we weren't obsessed with technical perfection in our productions, you wouldn't have to fuss with it. But this really has nothing to do with Pro Tools other than what Pro Tools has allowed us to do, so we now have to do it. ;)

 

But, Mike, we're talking about this feature because we can't always get it recorded right. And it has less to do with being "obsessed" than fixing a level mismatch that would otherwise be distracting.

 

For example, at the AES Show, I interviewed exhibitors using a wireless handheld mic, and my subjects (despite being audio professionals) had wildly different mic techniques. The guy from SSL held the mic at his solar plexus. The guy from Apogee held it against his bottom lip. Then I had people who would turn their head away from the mic (gesturing toward the gear they were demo'ing) resulting in yet a third problem (one that Vocal Rider and its ilk could handle). I couldn't adjust the level on the fly, so I fixed it in post. That's why the new clip-based gain (what you call "volume envelope," but it's really "volume envelope within a clip") feature in PT 10 is so welcome.

 

I don't have to tell you how valuable Vocal Rider is for my scenario above. PT 10 allows you to insert break points on the volume envelope's slope for even greater control, which might be your preferred approach (e.g., for situations requiring extreme precision or for adjustments too fast for an automated gain follower to make). "Clip-based gain" was a long time in coming, but as a PT user, it was music to my ears. And it was elegantly implemented.
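The underlying idea of an automated gain follower like Vocal Rider can be sketched in a few lines. This is a crude block-based leveler for illustration only (the real plug-in is far more sophisticated about timing and detection, and all the numbers here are made up): measure short-term level, then ride the gain toward a target within a bounded range.

```python
import math

def ride_gain(samples, target_rms=0.1, window=1024, max_db=6.0):
    # Crude leveler: per window, nudge the gain toward a target RMS,
    # riding no more than +/- max_db
    lim = 10.0 ** (max_db / 20.0)
    out = []
    for i in range(0, len(samples), window):
        block = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in block) / len(block))
        gain = target_rms / rms if rms > 0 else 1.0
        gain = min(max(gain, 1.0 / lim), lim)   # clamp the ride range
        out.extend(s * gain for s in block)
    return out

# A quiet passage comes up, a loud one comes down
quiet = [0.02] * 1024
loud = [0.5] * 1024
riding = ride_gain(quiet + loud)
print(round(max(riding[:1024]), 3), round(max(riding[1024:]), 3))
```

Manually drawn break points do the same job, just with human judgment replacing the RMS detector, which is exactly why they win for adjustments too fast or too precise for a follower.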

 

---------

 

And I'm not just a PT fanboy. Pro Tools still lags behind, say, Cubase (my other DAW of choice) in matters of gain control. Consider my little narrative that illustrates the clunkiness of manipulating gain in Pro Tools in all versions prior to 10 [edit inspired by Lee Knight].

 

When I'm fussing with clips, adjusting their gain (or their envelopes--including fades) is what I'm doing about 90% of the time. With Cubase, you highlight an audio clip (region) and right-click to bring up the tools you need to make gain changes (as well as other appropriate tools).

 

But Pro Tools? No. Here's a typical scenario I come across in almost every session.

 

1. I view my audio and realize a passage needs to come up in volume. I highlight it.


 

 

2. I make it its own clip/region (Control-E).


 

 

3. Now I should be able to right-click to adjust the gain, right? Okay, I'll do that now ...


 

Hmmm. Nothing seems to apply. Where are the tools to adjust my audio clip's gain? Shouldn't these be a context-sensitive choice?

 

 

4. No? OK, so I'll go up to the AudioSuite menu selection and pull down "Gain."


 

Where's Gain? You mean, it's not a main category, like EQ and Dynamics? Isn't Gain a bit more fundamental than, say, Pitch Shift?

 

 

5. Oh, Gain is tucked under "Other." Another step. [sigh.] Since when is Gain considered an extraneous function that gets subsumed under a catch-all category? Hey, Avid! For the next release, put Gain immediately under Dynamics, and move Pitch Shift to the Other menu. You're welcome.


 

-----

 

So that rankles. And it's why I'm not a goggle-eyed fanboy. But I choose and use Pro Tools for other reasons, and Avid does seem to be ticking off the chronic complaints of users. Yeah, it bugged me too that it took them so long to implement ADC. But I just bought the Mellowmuse ATA plug-in for $49 and got on with it. And now Avid has fixed it. Bravo, Avid, for listening to your customers. Would that cellular carriers followed your example.


  • Members

Jon, you've got to turn on Clip Gain Info: View>Clip>Clip Gain Info. Now a fader icon is present in the bottom left corner of each clip. Click it and adjust the fader.

 

That's in PT 10, Lee. I was talking about 9 (and before). That's why when Avid announced Clip-based gain at the press conference before the AES Show, and I saw those faders, I peed a little. Mike Rivers was sitting behind me, so if he noticed, he didn't say anything. :)


  • CMS Author

 

But, Mike, we're talking about this feature because we can't always get it recorded right. And it has less to do with being "obsessed" than fixing a level mismatch that would otherwise be distracting.

 

 

I fully understand that. There's a good reason for nearly any tool, though I'll admit that I have a lot of tools for which I've never understood the reason. Adjusting volume within the DAW track is a useful thing. The old way was just to listen to it, remember what you need to do, and move the fader at the right time when you're mixing or "mastering." Console automation helped, too.

 

On my Mackie HDR24/96, there's indeed a context-sensitive tool for adjusting the volume envelope. But it's region-based (or in Pro Tools speak, "clip-based"). If what you want to adjust doesn't have region boundaries at each end, you first need to make it so - a keyboard shortcut, Ctrl-Y. Do you also have to do that in PT10, or can you simply highlight a two-word segment in a five-minute continuous track and adjust the volume of the highlighted segment? That would be a good shortcut, but most systems aren't that smart.

 

What I was sad about was Craig's comment that most guitars are recorded direct with a simulator applied either during tracking or in mixing. This creates the problem he pointed out: your playing just doesn't respond to the music you're playing along with in the same way as if you had a real amplifier (the complete instrument). If that's how people are learning to play and use guitars nowadays, then in a few generations we'll be running short of real guitar players.


  • Moderators

What I was sad about was Craig's comment that most guitars are recorded direct with a simulator applied either during tracking or in mixing. This creates the problem he pointed out: your playing just doesn't respond to the music you're playing along with in the same way as if you had a real amplifier (the complete instrument). If that's how people are learning to play and use guitars nowadays, then in a few generations we'll be running short of real guitar players.

 

 

I know! Just like how we lost the fine art of pop sheet music writing when that pesky recorder thingy came along! Kids!


  • Members

 

if what you want to adjust doesn't have region boundaries at each end, you first need to make it so - a keyboard shortcut, Ctrl-Y. Do you also have to do that in PT10, or can you simply highlight a two-word segment in a five-minute continuous track and adjust the volume of the highlighted segment? That would be a good shortcut, but most systems aren't that smart.

 

 

In my (ahem) lavishly illustrated tutorial above, in Step 2, I show how it's done. You highlight the segment, then press a keystroke combo (in PT, it's Ctrl/Cmd-E). So not quite "simply highlighting," but almost as good.

 

 

What I was sad about was Craig's comment that most guitars are recorded direct with a simulator applied either during tracking or in mixing. This creates the problem he pointed out: your playing just doesn't respond to the music you're playing along with in the same way as if you had a real amplifier (the complete instrument). If that's how people are learning to play and use guitars nowadays, then in a few generations we'll be running short of real guitar players.

 

 

It doesn't have to substitute for a "real" performance (with a good sound dialed up in advance, through an amp and all). It's just an additional approach. The popular technique known as "reamping" allows for a straight, dry signal (via just a direct box, or a box such as those made by Radial Engineering, which match and convert impedances and levels properly) to be recorded to a track, where all manner of gain-based and other signal processing can be applied after the fact.

 

Reamping, like amp-simming, is just another option, and not necessarily a replacement for a good, organic performance--including one in which the sound inspires the performer to play better. In fact, in an ideal reamping situation, you'd have parallel signal paths--one for the reamp track, the other running through miked amps and such for the performer to play dynamically and in the moment--the way Hendrix did it. You can monitor just the amped track, and never even have to bother with amp-simming the reamped track. But if you do decide you need something weird, or you printed with too much distortion, you can take that same performance and reimagine it from the ground up through post-production means.


  • Members

Mike does use a computer to do recordings. And he records digitally. I have to say that in many ways, I share some of what he feels. I record using Pro Tools (obviously...I started this thread! :D ), but I use it largely as a tape recorder.

 

Sure, I do editing. Sure, I do cutting and pasting. And I use the software synths and drum machines upon occasion. I use MIDI. I'm glad it's all there. But it's not my main interest.

 

I personally still love the sound of a great guitar going through a great amp recorded by a great microphone going through a great mic preamp, recording a great guitar line. I'm not critical of these DI sorts of approaches, particularly if someone makes something that sounds completely new and different, something that's never been heard before. But let's face it...people are mostly trying to emulate what I've just described!

 

('Course, my interest is quite often in having my guitars and keyboards, recorded coming out of an amp, sound like something different or new, but that's beside the point here! :D )


  • Moderators
Mike does use a computer to do recordings. And he records digitally. I have to say that in many ways, I share some of what he feels. I record using Pro Tools (obviously...I started this thread! :D ), but I use it largely as a tape recorder.


Sure, I do editing. Sure, I do cutting and pasting. And I use the software synths and drum machines upon occasion. I use MIDI. I'm glad it's all there. But it's not my main interest.


I personally still love the sound of a great guitar going through a great amp recorded by a great microphone going through a great mic preamp, recording a great guitar line. I'm not critical of these DI sorts of approaches, particularly if someone makes something that sounds completely new and different, something that's never been heard before. But let's face it...people are mostly trying to emulate what I've just described!


('Course, my interest is quite often in having my guitars and keyboards, recorded coming out of an amp, sound like something different or new, but that's beside the point here! :D )



^^^ Of course!^^^ But I've used reamping and sims to save my ass many times. To dial in a sound after a hack has left my studio. To quickly get a part down when I can't light up an amp. To... whatever... Gasp! It's a tool. It's evolution. There are idiots who will misuse a hammer. But I like MY FREAKIN' HAMMER! :)


  • Members

It's sad, really sad, that this has become "usually." How will people ever learn to play the electric guitar? But then, I suppose there are a lot of people who don't know how to play a piano because all they've ever played is an electronic keyboard.

 

Since the late 60s, when I switched over to keyboard amps for guitar, I've done everything I can to minimize variables. Tubes going soft, humidity affecting speakers, switching amps and having a completely different interaction with the guitar...those kinds of things drive me nuts. My goal has been to get "my sound" into something predictable that can be used on stage or in the studio. The whole point of that is to take the things that DETRACT from playing guitar, at least from my perspective, out of the equation.

 

Tone really matters to me and I don't like having it changed arbitrarily. I want to be in control of that tone.

 

I suppose you could do that on stage with four amplifiers, in fact some probably have, perhaps not realizing that they were combining different types of distortion. But you're describing things that YOU do. You don't expect everybody to do that, do you? Otherwise you'd have to dream up some more tricks. ;)

The point is that the technology makes it easy to do this sort of thing, which again, removes the elements that interfere with playing guitar. I use it for multiband distortion; Jon uses it for consistency with vocals; someone else will figure out something else.

 

That's certainly one way of making music, but it's not my way. But then I don't expect everyone to do it my way either.

Actually what I'm talking about has nothing to do with making music, but with shaping the sounds to support the music in what, at least to my ears, is the optimal way.

 

For example, you could take what Keith Richards plays - the music - and it would sound totally wrong in a Stones song without it going through an amp, which shapes the sound of the music.

 

To be able to use sound itself as a way to enhance and support the music opens up incredible new opportunities for me, and everyone else who is into this sort of thing.


  • Members

One other thing...the reason why I invented the buffer board back in the 70s was to eliminate the interaction between guitar and amp. That way I only had to deal with two components: getting the pickup/strings/wood sound at the guitar, and the amplifier sound. Between the two of those, I could get the sound I wanted, and rolling off the guitar volume control did what I expected it to do, not what the amp imposed on me - and it didn't change every time I plugged into something else.


  • Members

^^^ Of course!^^^ But I've used reamping and sims to save my ass many times. To dial in a sound after a hack has left my studio. To quickly get a part down when I can't light up an amp. To... whatever... Gasp! It's a tool. It's evolution. There are idiots who will misuse a hammer. But I like MY FREAKIN' HAMMER! :)

 

I'm not against it at all, especially to use for odd things, creating new sounds, effin' up stuff, etc. I use amp sims too. I have an amp sim, the Vox ToneLab, that feeds my amp, and sometimes I use that, or I go direct. Sometimes I use the SansAmp plugin that comes with Pro Tools to add a little "hair" to stuff (keyboards, drums, bass, guitars, whatever). I'm not against any of it. I love it. I'm glad it's there. But I also like to acknowledge that people like Mike have preferences. He doesn't seem like he's against the advance of technology like what Craig is describing, just that he's not so interested in it himself.

 

And I think he's saddened in part because Craig is describing something that doesn't react to your playing. Is there something wrong with lamenting that a lot of people recording direct are not going to get the visceral experience of playing through an amp, and that most of 'em aren't going to go through the process Craig describes to get a more reactive, responsive sound? That doesn't sound anti-technology to me; that's just insisting that there needs to be something more emotional and responsive, and lamenting that people don't get that as much anymore. And that seems valid to me.

 

'Course, I think Craig may be describing something else entirely that he's trying to achieve.


  • Members

And it looks like that's about right.

 

You know what intrigues me, and I don't feel it has been fully explored yet? Modeled instruments: not to emulate real-world instruments, but to model something that could not possibly, physically exist in real life, or could not normally be played by a human. This would seem to have all sorts of possibilities beyond trying to emulate someone playing a baritone saxophone or a mallet instrument, or the other things modeled instruments are typically used for.


  • CMS Author

 

I know! Just like how we lost the fine art of pop sheet music writing when that pesky recorder thingy came along! Kids!

 

 

And then there are those great LP album jackets with copious notes that went away with music downloads. Now you can download an image with even more copious notes, but you have to either print it out yourself or be near your computer to read the notes and enjoy the pictures.

 

Progress! Bah, humbug!


  • CMS Author

 

Reamping, like amp-simming, is just another option, and not necessarily a replacement for a good, organic performance

 

 

But if "most" guitar recording is done without the benefit of playing through an amplifier, then you don't get that feedback (mental as well as acoustic) while you're playing. This is why it takes so much manipulation to get it to sound like someone really played his soul out.

 

I can see using a simulator or re-amping a direct track with a different amplifier or different tone settings or a different effect to see if it fits better with the other tracks (that may have been recorded after the guitar part), but not as the primary means of recording. Though I guess it fits in with the contemporary music production model of "Let's record a bunch of stuff and see how we can put it together later on."


  • CMS Author

Perhaps one reason why I feel the way I do is that I nearly always record only acoustic instruments. When I record electric guitars, it's a live show on stage and I'm not going to be able to change the sound of the guitar, for better or for worse.

 

When I record an acoustic guitar, I rarely use any EQ or compression either. If you want to sound like Tony Rice then get Tony Rice to play on your record. But I seem to remember when Craig was recording that classical guitarist several years back talking about some pretty tricky stuff he was doing, not to change the sound of the guitar, but to make it more consistent so he didn't have to fiddle around with it much after it was recorded.


  • Members

But if "most" guitar recording is done without the benefit of playing through an amplifier, then you don't get that feedback (mental as well as acoustic) while you're playing. This is why it takes so much manipulation to get it to sound like someone really played his soul out.

 

Well, being able to get 64 samples of latency - which was only possible recently - gives me enough of a real-time feel that I get all the feedback I need. But I also work very differently live and in the studio. When recording, I'm actually not that into "feeling" the notes from an amp, I find it sort of deceives me into thinking the guitar will record better than it does...like turning up the music loud when mixing and thinking it's better, but then when you play it back at rational levels, it sounds small.
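For scale, here's the arithmetic on that 64-sample figure (ignoring converter and driver overhead, which add a bit more in practice):

```python
buffer_samples = 64
sample_rate = 44100

one_way_ms = 1000.0 * buffer_samples / sample_rate   # one buffer of latency
round_trip_ms = 2.0 * one_way_ms                     # in and back out again

print(round(one_way_ms, 2), round(round_trip_ms, 2))  # 1.45 2.9
```

Around 3 ms round trip is roughly the delay of standing a meter from your amp, which is why it can feel close enough to real time.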

 

Live, of course, it's great to have a roaring amp because that's what the audience hears too. But in the studio, they won't feel the amp. For proof, all you need is the guitarists who hate amp sims, but can't tell the difference on playback compared to the real thing. There's a component of playing guitar that just doesn't translate into either iron oxide particles re-arranging themselves or 1s and 0s, so I'm always looking for ways to improve that translation.

 

I can see using a simulator or re-amping a direct track with a different amplifier or different tone settings or a different effect to see if it fits better with the other tracks (that may have been recorded after the guitar part), but not as the primary means of recording.

 

Frankly, after-the-fact changes of the core sound are something I never do, aside from [possibly] a minor tweak, like a little more or less brightness. To me, the sound you record with IS the sound of the music, for better or for worse. That's why I never had problems with Line 6's ToneDirect monitoring that recorded processed sound...I'd get the sound I wanted, and record it. The rest of the song would be shaped by that sound, so changing it afterward made no musical sense.

 

My entire musical ethic has been shaped by doubling on guitar and keyboard for decades (although technically, I'm a much better guitar player). Elements of each cross over into the other. For example, with keyboards I never use the mod wheel for vibrato, which as a guitarist sounds artificial to me...I always add vibrato with the pitch bend wheel. Conversely, with guitar I often try for a smooth, modern, synthesized quality.

 

When I did my Forward Motion CD, as I used MIDI guitar several people asked which were the guitar parts, and which were the keyboard ones. I realized that if the parts sounded like MIDI guitar, they were keyboards and if they sounded like keyboards, they were MIDI guitar :)

 

I don't advocate that anyone follow this path, it's my path. I'm just happy that a) the technology exists so I can pursue it, and b) I don't have to build my own stuff any more because no company was producing what I needed.


  • Members

 

But I seem to remember when Craig was recording that classical guitarist several years back talking about some pretty tricky stuff he was doing, not to change the sound of the guitar, but to make it more consistent so he didn't have to fiddle around with it much after it was recorded.

 

 

Not sure if you're talking about the Linda Cohen or Nestor Ausqui records, but the tricky thing you're referring to was recording in mono, and creating the stereo imaging afterward through EQ. I honestly believe that if anyone has only heard stereo-miked guitar, they've never heard what a classical guitar really sounds like because of the phase issues.
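Craig doesn't spell out his exact EQ moves, so here's a generic sketch of the complementary-EQ trick (crossover and width values are made up): the same mono take is tilted oppositely in the two channels, which creates width while the channels still sum back to the original mono signal, avoiding the phase cancellation that spaced stereo miking can invite.

```python
import math

def onepole_lowpass(xs, fc, sr=44100):
    # One-pole lowpass; subtracting its output from the input gives the highs
    a = math.exp(-2.0 * math.pi * fc / sr)
    y, out = 0.0, []
    for x in xs:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def pseudo_stereo(mono, fc=1000.0, width=0.5, sr=44100):
    # Complementary-EQ trick (hypothetical settings): lows lean left,
    # highs lean right, and left + right reconstructs the mono original
    low = onepole_lowpass(mono, fc, sr)
    high = [m - l for m, l in zip(mono, low)]
    left = [(1 + width) * l + (1 - width) * h for l, h in zip(low, high)]
    right = [(1 - width) * l + (1 + width) * h for l, h in zip(low, high)]
    return left, right

mono = [math.sin(2.0 * math.pi * 220.0 * n / 44100) for n in range(1024)]
left, right = pseudo_stereo(mono)
```

The mono-sum property is the whole point: collapse the two channels and you get the original recording back, which a phasey spaced-pair recording can't promise.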


Archived

This topic is now archived and is closed to further replies.

