Harmony Central Forums


Great trick for demoing mike placement, etc.


  • Great trick for demoing mike placement, etc.

    I'd like to share a great studio technique that I call the "echo demo". I hope you find it as useful as I have.


    How do you determine optimum mike placement? By ear, of course, but how do you hear just the recorded sound without any live sound? The big-budget answer: have your assistant fiddle with the mike out in the tracking room while you listen in the control room. Well, I ain't got no assistant. I ain't got no control room. All I got is a one-room home studio with a PC and a DAW.


    So, here's what I do: I set up my DAW to send the incoming audio from the mike directly out through the monitors, without actually recording it. (In Pro Tools, this involves record-enabling the track and disabling low-latency monitoring in the Options menu.) In other words, I use my DAW like a PA.

    I want to emphasize the need for caution at this point. BE CAREFUL!!! Studio mikes are very sensitive and often prone to feedback. Never put the mike close to the monitors, or point it right at them. Use a gobo, if you have one. I recommend first turning the monitor volume all the way down, then setting everything up, then finally slowly turning up the monitors.

    Next I put a long delay plug-in on the master track and set it for 3-4 seconds. Now the fun begins. I hold the mike up to the instrument (or amp) and have the musician play a short test riff (perhaps an arpeggio). Then we listen to the riff echo in the monitors. I move the mike around and he/she plays the exact same riff, and we listen again. Over & over until we zero in on the best possible sound. Notice I said "we". A big advantage the echo demo has over the control-room method is that the musician can offer his feedback, too.

    The preceding example is for a melody instrument like a sax. For brief percussion sounds, say a cowbell, using a shorter delay greatly speeds up the process.

    Now, there's just one little problem. A sensitive mike will pick up the monitor echo and send it off to the DAW, to be sent out through the monitors again as a second, fainter echo. These secondary echoes can be confusing and annoying. There are three ways to get rid of them:

    1. Use headphones.

    2. If by chance you have a mike muting footswitch (called a "cough drop" or a "short stop"), just mute the mike during the echo.

    3. The best solution: use a gate. Put it on the master track, right before your delay plug-in. Set the threshold low enough for the test riff to get through, but high enough to keep out the monitor echo. Set your attack as fast as possible, so you don't chop off the instrument's attack. Give yourself lots of hold and decay. Set them long enough that you don't chop off the end of the riff, but short enough that the gate shuts down before the echo comes back. For a 2-second test riff through a 3-second delay, try setting each to a half-second. Ideally, use a gate with a sidechain, and send a slightly-less-delayed signal to the sidechain, so that the gate opens up a bit before the note.
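    To make the timing concrete, here is a rough sketch in plain Python (not any particular plug-in's algorithm) of the gate-before-delay chain with the example settings above: half a second each of hold and decay, a 3-second delay, and a made-up threshold, sample rate, and riff for illustration.

```python
# Sketch of the gate-before-delay idea: the gate passes the 2-second test
# riff, then closes during the hold/decay window, so the echo that returns
# from the monitors 3 seconds later never re-enters the delay line.
# SR, the threshold, and the riff are invented for the demo.

SR = 1000  # low sample rate keeps the sketch fast; real audio uses 44100+

def simple_gate(samples, threshold=0.1, hold_s=0.5, decay_s=0.5, sr=SR):
    """Hard-knee gate: opens instantly, holds, then ramps down to closed."""
    out = []
    hold_left = 0                       # samples of hold remaining
    decay_total = int(decay_s * sr)
    decay_left = 0
    for x in samples:
        if abs(x) >= threshold:         # above threshold: gate fully open
            hold_left = int(hold_s * sr)
            decay_left = decay_total
            gain = 1.0
        elif hold_left > 0:             # below threshold but still holding
            hold_left -= 1
            gain = 1.0
        elif decay_left > 0:            # decay phase: ramp the gain down
            gain = decay_left / decay_total
            decay_left -= 1
        else:
            gain = 0.0
        out.append(x * gain)
    return out

def delay(samples, delay_s=3.0, sr=SR):
    """Pure delay: prepend silence (a RAM buffer, nothing more)."""
    return [0.0] * int(delay_s * sr) + samples

# A fake 2-second "riff" followed by 2 seconds of silence.
riff = [0.5] * (2 * SR) + [0.0] * (2 * SR)

gated = simple_gate(riff)
monitored = delay(gated)    # what comes out of the speakers: riff at t = 3 s
```

With these settings the gate is fully closed by t = 3 s, exactly when the echo starts coming back through the mike.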

    You can demo multiple-mike setups this way, too. Just set up your DAW to output all the mikes. The one gate and one delay will handle everything, though you may have to adjust the gate threshold and/or the monitor volume. With the plug-ins on the master track, I think of it as gating/delaying the speakers, not the mikes.

    Here's a fun idea for rhythm instruments. Set the delay to correspond to a common tempo, say 2 seconds for one bar of 120 bpm. Get the musician to alternate playing a bar and listening for a bar. He can often fall into a groove and "jam with himself". You may need to use a shorter decay on the gate. You can also double the delay (use a second plug-in if you have to) and play/listen for two bars. This transforms a tedious process into something fun and helps keep your clients fresh for recording.
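    The delay-time arithmetic above is just beats per bar times seconds per beat. A tiny sketch (the function name is mine, not from any DAW):

```python
# One bar of 4/4 at 120 bpm lasts 4 beats * (60 / 120) s per beat = 2 s,
# which is the example delay time from the text.

def bar_delay_seconds(bpm, beats_per_bar=4, bars=1):
    """Delay time in seconds matching a whole number of bars at a tempo."""
    return bars * beats_per_bar * 60.0 / bpm
```

So for the two-bar play/listen variant at 120 bpm you'd set the delay to 4 seconds.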

    Okay, now that you've determined how best to place a given mike for a given instrument, how do you choose which mike to use? You demo them, of course, but how do you listen back? The usual way is to record the musician with several mikes at once, then loop the recorded results, using the solo buttons to switch back and forth between samples. Workable, but cumbersome.

    Let's use echo-demoing: First, set up your DAW so that each mike has its own track, output directly to the monitors as before. Then put a gate and a delay on the master track, as before. Then put another delay on the second mike track, two delays on the third, three on the fourth, etc. Now each mike has its own unique delay time. Play a riff, and you will hear a series of echoes, one for each mike. This allows you to instantly evaluate your mikes. You can even tweak the mike placement further at this point. Be sure to play both high and low riffs, to test the full range of the instrument.
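    As a sanity check on the stacked-delay arithmetic, here's a small sketch (the names are mine) of when each mike's echo arrives: with a base delay D on the master and (n-1) extra copies of the same delay on the n-th mike's track, the echoes land at D, 2D, 3D, and so on.

```python
# Arrival time of each mike's echo, given one base delay on the master
# plus (n-1) stacked copies of that delay on mike track n (1-indexed).
# Useful for knowing when to listen for which mike in the monitors.

def echo_schedule(num_mikes, base_delay_s=3.0):
    return [n * base_delay_s for n in range(1, num_mikes + 1)]
```

So with a 3-second base delay and three mikes, you'd hear the riff back at 3, 6, and 9 seconds.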

    One more application: demoing effects. To do this, set up as for demoing mike placement, and add a send to an aux track. Put your effect on the aux track, along with another delay. Now when you play a riff, you'll hear a raw echo followed by a processed echo. If you want to compare different settings, set up several aux tracks, each with its own effects chain. Put one delay on the second, two on the third, etc. Remember to mute the raw signal. Now you'll hear a series of echoes, each one processed differently. This is especially useful for experimenting with the order of plug-ins.

    Echo-demoing really excels in situations where the musician is also the recording engineer. A musician with a home studio can learn a lot about recording himself. It also works for experimenting with effects used for his live sound. But what if you are in a full studio and you want to take advantage of the speed and directness of echo-demoing? You have two choices: put the musician and the mike(s) in the control room, or put your monitors in the tracking room. Which way you go depends on the particulars of your set-up.

    These are just some of the uses of this method. It's a great way to explore the potential of your studio. Any time you want to experiment with different options and quickly evaluate the results, consider the echo demo.

    alt-tuner - a microtonal midi plug-in
    www.TallKite.com/alt-tuner.html
    https://soundcloud.com/tallkite
    http://www.youtube.com/channel/UCn9-hNMzRuVY3RsWjrGsz_g

  • #2

    Not sure I understand the benefit of the echo.

    With the direct monitoring shut off, you are listening to the processed signal. Any delay there comes from the latency buffer settings of the computer/DAW. What you hear is the processed signal and the sound quality of the converters taking the signal from analog to digital, then from digital back to analog.

    It's true you can use the mic position to get the best sound quality from your converters this way, but beyond that, you really have no reference to work from other than your own instincts and what you hear.

    If you were using two mics on the same source, then using this method in mono can help you tune in both mics for minimal phasing. With a single mic it's going to vary depending on the quality of your converters and the type of mics used.

    If you are tracking many instruments at once, and those mics are all at different distances, *and* you have bleed-over, which is an occurrence in small rooms without isolated booths set up, then you can have all kinds of phasing issues between mics that will have to be fixed by shifting tracks a few µs either way.

    Some tonal losses may occur in the process, but I've found that having my mics at equal distances from cabs when recording a live band is much better than having to fix all the crazy phase issues after tracking. Drum mics are especially a problem because they pick up a lot of bleed-over.

    The small tonal losses can be fixed in most cases by EQing the mix. Phasing is a much bigger tone killer because it may occur on some notes and not on others. If you keep phase issues minimal, the small differences in optimal mic positions don't amount to much in comparison, but they are collective, so you do what you can do. I've adopted angling mics at the cones in degrees to get the best tones, and dialing up good tones from the head. I don't use mic distance because my studio is super dead and I gain no benefit from room ambiance.

    In your case, micing a sax is more a matter of finding the right mic for the job. They do make several clip-ons that do a really good job. I prefer this because you don't have the problems with changing tone and dynamics as the sax player moves around. I tried a dozen mics last time I tracked a sax player and wound up using a micro condenser with a mic holder clipped to the sax. It picked up a little finger action and stoppers opening and closing, but the tones I got were solid and workable in a mix.

    How the tracks sound in a mix (not solo) is all that counts. If the bass is tracked direct, it can be the foundation for all your other tracks as a reference point to build upon. If some tracks can't be boosted enough to compete with the bass, then your procedure may have relevance in getting a solid solo tone happening. Or you would have to weaken the bass to match the other tracks, which isn't what you want to do but is sometimes the case.

    If anything, what I'd suggest is, instead of adding echo to the tracks, download a free audio analyzer like Voxengo SPAN. It's a stereo analyzer, so it can overlay two inputs.

    Split your mic signal. Run one channel in dry with direct monitoring going.

    Take the second channel, and use your method minus the echo and monitor the processed signal.

    Put Voxengo Span in the mains effect bus and assign the dry and processed signal to the plugin panned hard right and left.

    Then in Span select a stereo source and have both waveforms going.

    Next you'll need to match the gain levels of both so the waves peak about the same.

    From there you can test your micing positions to see if there are really any differences between the two. Chances are the direct and processed signals will have some gain-related differences, but the changes when you move the mic will occur equally on both. This is my point about positioning needing a point of reference.

    Now, if you used two identical mics, one direct and one processed, then did this comparison moving only one, you'd have some reference point to compare the two. In the first case you are using your ears and memory of what sounds good, which could be good or bad depending on your experience. In the second you at least have a solid reference point, minus the mic tolerances. Even then, again, it's a matter of how it fits in a mix.

    A better method is to use good headphones and dial up the tones you need using the amp's EQ, gain settings, and choosing the right mic. You can use the monitors, but again, it's a matter of reference. The goal of a good track is not always about how good the instrument sounds solo. Maybe an acoustic instrument, which has a wide frequency response, can be dialed up for a high-fidelity sound, but it takes a lot of experience to dial up other instruments that have limited frequency responses when placed in a mix.

    I spent decades getting the right gear, right mics, amp settings, etc. so I could track instruments and not have to do a lot of RX to get them to fit in a mix. I found gain staging a much bigger factor in this process. The ears perceive frequencies differently at different volume levels. They hear more bass as gain increases, but in reality, what a mic hears is pretty much the same.

    In your case, the distance of a mic from a source will change the frequency response, but that's caused by the mic's proximity effect. The bass boost of a close mic vs. a mic's boosted mids further away can easily be accomplished using an EQ in the box. I prefer to have the mic closer for more detail and dial out that detail as needed building a mix.

    I do, however, do a lot of direct recording using guitar preamps and DI boxes. I have in the past used the processed signal to judge the best tonal response based on preamp settings. I have my drivers set so I have a wide range of flexibility with my input signals. I can record a weak signal and boost it in the box and pretty much match a stronger signal. I can track using my monitors without any feedback this way, so I can compare any extreme I want.

    In the past few years I've been tracking at lower levels. It was hard to get over the hotter tracking levels I used recording analog for so many years. In analog, you'd do something similar to what you're doing, except you'd monitor the direct signal and compare it to the playback head. You could then check the tape saturation vs. the direct signal and tweak gains and EQ to get the best taped sound.

    In digital it's strictly the converters and preamp. Since digital has a good 100 dB of headroom with a very low noise floor, the difference between tracking hotter and cooler doesn't have the sweet spot analog gear did. There's a little with the preamp, finding a good RMS level and mic position, but so long as your DAW meters are running between 25-50% you should have plenty there to work with.
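    A quick back-of-the-envelope check of that 25-50% meter guideline, assuming the meters read linear amplitude (what a "%" scale means can vary between DAWs, so this is an illustration, not a spec):

```python
# Convert a linear amplitude fraction of full scale to dBFS:
# 50% of full scale is about -6 dBFS, 25% about -12 dBFS, so tracking
# in the 25-50% range leaves roughly 6-12 dB of headroom before clipping.
import math

def fraction_to_dbfs(fraction):
    """dBFS of a linear amplitude expressed as a fraction of full scale."""
    return 20.0 * math.log10(fraction)
```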

    If you have a lot of room ambiance/reverb, then mic position can make a huge difference in what a mic captures. It can be good or bad depending on the resonance quality and how it fits in a mix. I do encourage you to use your experiments to get to know your gear, your technique, and how it works in a mix. These kinds of things give you the lifelong experience you need to get good at recording. Just keep in mind, it's also a science, and relating the hands-on to the science in back of what you are doing can reinforce what you learn and expand what you may not have realized existed.

    I got a degree in electronics many years ago and I'm constantly rediscovering the correlation between the science and the hands-on. The cool part is the formulas used in acoustics, electronics, and music are all the same. You simply plug in different values to get different answers. It's all physics and it's all basic science. My only regret is I had crappy math teachers in school who had no gift to inspire others in how the dry math could be used in all the industries, including music. If I had only known the value of algebra in music, I would have sucked it in back in grade school instead of squeaking by. Once I got into electronics I had a lot of catching up to do.



    • TallKite commented:

      Hi WRGKMC, thanks for reading. You make a lot of good points about cabs and cones and sax mikes, etc. You are clearly very knowledgeable.

       

      "Not sure I understand the benifitc of the echo." It lets you separate the recorded sound from the live sound. Let me make something clear: the echo effect is not part of the finished recording. It's just a way to immensely speed up the process of making a test recording and listening back to it.

       

      "With the direct monitoring shot off you are listening to the processed signal." There is no processing, except for the inevitable digitizing. What you're listening to is exactly the same as what you would record. Instead of ones and zeros being stored on a hard drive for a minute or a day or a year, they are stored in computer RAM for a few seconds. Digital is digital, hard drives sound the same as RAM.

       

      "Any delay there is the latency buffer settings of the computer/daw." I'm suggesting deliberately adding 3-5 seconds of delay on top of the DAW's latency.

       

      "What you hear the the processed signal and the sound quality of the converters converting the signal from Analog> digital, Then Digital > back to analog." Again, there is no processing, just a delay. You will *always* hear the sound quality of the ADC and DAC converters. That's the inevitable by-product of recording on digital media.

       

      Now you could argue that the delay somehow colors the sound. I don't buy it. You can write a delay in Jesusonic (part of Reaper), a program maybe 10 lines long, that very obviously just copies the incoming signal to RAM and sends it out again later. You can't hear RAM. You could argue that the delays currently available color the sound somehow. That just means that a colorless delay is a good thing to have. I don't think you can argue that delaying a signal inevitably colors it.
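      That argument can be made concrete with a few lines of code. This is a plain-Python stand-in for such a Jesusonic sketch, not actual Jesusonic: a delay that only buffers samples in RAM, so the delayed output is bit-for-bit the same numbers as the input, just shifted in time.

```python
# A pure delay line: copy each incoming sample into a FIFO buffer (the
# "RAM") and play back the oldest buffered sample. No arithmetic ever
# touches the samples, so the output cannot be colored.
from collections import deque

def ram_delay(samples, delay_samples):
    buf = deque([0.0] * delay_samples)  # pre-filled with silence
    out = []
    for x in samples:
        buf.append(x)                   # copy incoming sample to RAM...
        out.append(buf.popleft())       # ...and play back the oldest one
    return out

signal = [0.1, -0.4, 0.25, 0.9, 0.0, -0.33]
delayed = ram_delay(signal, 2)          # same numbers, two samples later
```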

       

      "If you were using two mics on the same source then using this method in mono can help you tune in both mics for minimal phasing." Good point, you can use this method to check for phase cancellation right away.

       

      "With a single mic its going to vary depending on the quality of your converters and the type of mics used." I'm really not getting your point here. (Forums can be confusing, eh?) You're using the same converters with this method as when you record normally.

       

      "If you are tracking many instruments at once, and those mics are all at different diatances, "And" you have bleedover which is an occurance in small rooms without isolated booths set up, then you can have all kinds of phasing issues between mics that will have to be fixed with shifting tracks a few us either way." Another good point. Taken to an extreme, you could echo-demo the whole band. Have them play four bars, then stop. You then listen to the whole four bars, checking for phasing, and then you adjust the mikes accordingly. This is probably more hassle than it's worth, because it's easier to just shift the tracks as you say.


      As for checking for bleed, you could set up the DAW to echo each mike, or a group of mikes, separately. Example: you have the band play 4 bars, then stop. The guitarist's mike echoes first, and you listen for bleed. Then 4 bars later the drummer's mikes echo, and you check for bleed in his mikes. And so on. (So you have something like 30 seconds of delay on some tracks. Hey, it's the digital age, RAM and CPU are cheap! Just stack 8 delay units in the FX rack.) Then you fiddle with the mikes and the gobos, the band plays 4 bars again, etc. For most setups, probably not worth the hassle, but in certain situations, cramming a big band in a small studio, this could be a life-saver.

       

      "If anything what I'd suggest, is instead of adding echo to the tracks, download a free audio analyzer like Voxengo span. Its a stereo analizer, so it can overlay two inputs.

      Split your mic signal. Run one channel in dry with direct monitoring going.

      Take the second channel, and use your method minus the echo and monitor the processed signal."

      Not following. What processed signal? The only processing is the delay, which you say to leave out.
