
Homerecording VS EQ


DigitMus

Recommended Posts

  • Members

There is a tool for checking phase (I can't remember what it's called), and of course you can look at the waveforms in your DAW as well. You can also listen to your audio in mono and see if it suddenly grows thinner. If it does, move the mic(s) around (or, if you've already recorded, nudge the waveforms) or use the polarity-reversal switch that comes on a lot of plug-ins and see if that doesn't help.


  • Members

Plus, Recording is my favorite. It's the one I subscribe to. I think his opinions should be noted by our fearless party master. Longer is better. The more technical the better. I want 2500 word articles about phase with a bunch of mathematical equations to ponder. I want it to make my head hurt. I learned everything I know about audio from Mike. ;)


  • CMS Author

 

First, I have read the thread but I haven't read the article.

 

Neither have I, completely, and I'm not even sure if I'm responding to blue, someone who wrote a letter to EM, or to the EM author, but the following "error" may or may not be an error:

 

 

In the May 2007 Electronic Musician, well-known recording/keyboard writer Jim Aikin "corrected" a reader who had written in to complain about an Aikin article in a previous issue that claimed there was no relationship between sample rate and latency (among other inaccuracies). Aikin chided him, saying, "there is no direct relationship between sampling frequency and system latency" -- and editor Steve Oppenheimer compounded the error by agreeing with Aikin that "latency is not related to sampling rate."


Which anyone who understands the basics of digital recording should either know or quickly be able to figure out is simply not true.


And it took YET ANOTHER letter -- complete with elementary common-sense logic and some supporting basic sampling math -- to get them to retract that obviously incorrect info.


Pretty freakin' amazing, I thought.

 

The real answer is not simple. First of all, you need to agree on the meaning of "relationship" (between latency and sample rate). Is it directly proportional? Is it approximately directly proportional? Is it just different at different sample rates? And what latency are you talking about?

 

I don't play virtual instruments from keyboards, so the combined latency of the keyboard, the MIDI data transfer and conversion, the sample-playing process, and the D/A conversion doesn't mean anything to me. But it certainly means something to the person pressing the key and waiting for the sound to come out.

 

However, I do record a lot of acoustic music and singers, and what matters to me is monitor latency - the time that it takes for a signal to get from the microphone input to the headphone output. In an analog console, assuming that you monitor the input and not the return from a DAW (or the playback head of a tape deck), that time is as close to zero as the speed of electrical transmission will permit. Only sticklers would quibble about calling it "zero." But when there's a digital path involved, there will be a finite and easily measurable delay. This is what's called "latency," and it's the sum of several different delays.

 

It turns out that this delay is not independent of sample rate, and is nearly always less at higher sample rates. But depending on the design, it can be either nearly inversely proportional to sample rate (if it's 3.00 ms at 48 kHz, it's 1.55 ms at 96 kHz and 0.79 ms at 192 kHz), or it can just be different (3.0 @ 48, 2.5 @ 96, 2.1 @ 192).
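The two behaviors can be sketched with some simple arithmetic. This is just an illustration with made-up numbers, not any particular converter: in the first model the filter delay is a fixed number of samples, so the latency in milliseconds scales inversely with the sample rate; in the second, part of the delay doesn't depend on the clock at all, so the latency shrinks but not proportionally.

```python
def fixed_sample_delay_ms(filter_samples, fs_hz):
    """Delay fixed in SAMPLES: latency in ms is inversely proportional to fs."""
    return 1000.0 * filter_samples / fs_hz

def mixed_delay_ms(filter_samples, fixed_ms, fs_hz):
    """Part of the delay is clock-independent: latency shrinks, but not in proportion."""
    return fixed_ms + 1000.0 * filter_samples / fs_hz

for fs in (48_000, 96_000, 192_000):
    print(fs,
          round(fixed_sample_delay_ms(144, fs), 2),   # 3.0 -> 1.5 -> 0.75
          round(mixed_delay_ms(72, 1.5, fs), 2))      # 3.0 -> 2.25 -> 1.88
```

The hypothetical numbers (144 samples, 1.5 ms fixed) were picked to land near the figures quoted above; real hardware will differ.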

 

What most affects the delay through a digital audio path is the delay through the anti-aliasing filter on the input and the smoothing filter at the output. The actual time needed to grab one sample and be ready for the next is so much shorter than the sample period that this part is nearly clock-independent. The fact that you're working with twice as many samples in a given amount of time at a higher rate is irrelevant here unless the computer can't keep up. The digital filters, however, are most often implemented as a series of delays with the signal fed back through them. The steeper the filter slope, the greater the delay through the filter at any given clock rate.
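For the common linear-phase FIR case, the filter's delay is easy to put a number on: a filter with N taps delays the signal by (N - 1) / 2 samples, so a steeper filter (more taps) means more delay at any given clock rate, and the same filter gets faster in milliseconds as the sample rate goes up. A minimal sketch, with an arbitrary 127-tap filter as the example:

```python
def fir_group_delay_ms(num_taps, fs_hz):
    # A linear-phase FIR filter delays the signal by (N - 1) / 2 samples;
    # at a fixed tap count, the delay in milliseconds halves when fs doubles.
    delay_samples = (num_taps - 1) / 2
    return 1000.0 * delay_samples / fs_hz

print(fir_group_delay_ms(127, 48_000))   # ~1.31 ms
print(fir_group_delay_ms(127, 96_000))   # ~0.66 ms
```

Converters that use minimum-phase or IIR filters won't follow this exact formula, but the more-slope-means-more-delay trade-off still holds.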

 

The input filter must be very steep. You want to make absolutely sure that nothing higher than just a tad below half the sample rate gets sampled. The output filter can be (and usually is) more gentle. If you're sampling at 96 kHz and you can't hear anything higher than 25 kHz, you can use a fairly gentle slope to get rid of what's above 25 kHz. So the delay of the output filter can be (and often is) shorter than that of the input filter. This is why throughput delay is never in exact inverse proportion to sample rate.
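Putting those two filters together shows why doubling the sample rate doesn't halve the delay. The tap counts below are purely hypothetical: the input filter stays steep at both rates, but the output filter can be shorter at 96 kHz because there's more room between 25 kHz and Nyquist.

```python
def throughput_delay_ms(input_taps, output_taps, fs_hz):
    """Total filter delay through an A/D/A path, assuming linear-phase FIR filters."""
    delay_samples = (input_taps - 1) / 2 + (output_taps - 1) / 2
    return 1000.0 * delay_samples / fs_hz

d48 = throughput_delay_ms(191, 49, 48_000)   # steep input, fairly steep output
d96 = throughput_delay_ms(191, 25, 96_000)   # same input, gentler output
print(d48, d96, d48 / d96)  # the ratio comes out above 2, not exactly 2
```

So the delay still drops at the higher rate, just not in strict proportion, which matches the measured-ballpark behavior described below.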

 

However, the two are definitely related. I know of no A/D/A that has the same throughput delay at all sample rates. However, I have looked at and tested equipment that has a delay of the type I'm most interested in that's in the ballpark of 2.5 ms at 48 kHz and 1.8 ms at 96 kHz. It's related, but not in direct proportion. I haven't tried to find a higher order equation that relates them, but I'm sure it wouldn't apply to every design.

 

Of course, when you're talking about the latency that you adjust by changing the buffer size so that you don't get clicks, that's just a matter of making the computer keep up - you increase the delay until it works - and this, too, is related to, but not directly proportional to, the sample rate.
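The buffer arithmetic itself is simple. A sketch, assuming a typical double-buffered driver (the buffer sizes are just examples): at a fixed buffer size the latency halves when the rate doubles, but if the computer can't keep up at the higher rate and you have to double the buffer, you're right back where you started.

```python
def buffer_latency_ms(buffer_frames, fs_hz, num_buffers=2):
    # Each buffer must fill before it can be handed off, so the buffering
    # delay is (frames per buffer) x (number of buffers) / (sample rate).
    return 1000.0 * buffer_frames * num_buffers / fs_hz

print(buffer_latency_ms(256, 48_000))  # ~10.67 ms
print(buffer_latency_ms(256, 96_000))  # ~5.33 ms
# But if 96 kHz forces you up to a 512-frame buffer to avoid clicks:
print(buffer_latency_ms(512, 96_000))  # ~10.67 ms again
```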

 

So, who's right here? Nobody. Not the author, not the editor, and not the letter writer. Any or all of them may actually know the answer, but none had enough space to explain it. The real error (I guess - I didn't see it) in the original article was to declare absolutely that there was no relation. There are few absolutes in this technology, and the author had less space in which to make his point than I've taken here.

 

The letter writer had a legitimate observation - that sometimes there is a dependence between latency and sample rate. Depending on the specific design and on the application, it may or may not be significant. Delays in the 2-3 ms range can make your voice in the headphones sound funny. But what difference does it make to a keyboard player if the sound comes out 0.516 seconds or 0.518 seconds after he hits the key? He can't play very well either way.


Archived

This topic is now archived and is closed to further replies.
