
Fletcher-Munson: Our most sensitive freqs are also the most potentially obnoxious?


rasputin1963

Recommended Posts

  • Members

On the famous Fletcher-Munson curve, we humans have the most sensitivity to frequencies in the 3 kHz-5 kHz region. I guess we just evolved that way, because these would be the frequencies of most use to us when we went out hunting and gathering and watching for sabretooth tigers? :p Probably of most use in hearing human speech and the crying of babies? :whisper:

 

Is this band of frequencies also the one that can potentially become the most obnoxious/fatiguing in recorded music if it's raised too high in a mix, or otherwise not judiciously EQ'd?


  • Members

I view it a little differently. The ears are highly selective in that range and can hear small changes in frequency response and amplitude. You can jam quite a few instruments into that range and the ears can still decipher what they are.

This graph shows the ear's sensitivity. The sound pressure has to hit the line before it's perceived by most people.

 

[image: Perceived_Human_Hearing.svg - the threshold-of-hearing curve across frequency]
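If you want to put rough numbers on where that line dips, here's a minimal Python sketch (numpy assumed; the test frequencies are just picked for illustration) of the standard IEC 61672 A-weighting curve, which roughly tracks the inverse of an equal-loudness contour at moderate levels and peaks in the low-kHz region:

```python
import numpy as np

def a_weighting_db(f):
    """IEC 61672 A-weighting in dB: roughly the inverse of an
    equal-loudness contour, so its peak marks where the ear is
    most sensitive."""
    f2 = np.asarray(f, dtype=float) ** 2
    num = (12194.0 ** 2) * f2 ** 2
    den = ((f2 + 20.6 ** 2)
           * np.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
           * (f2 + 12194.0 ** 2))
    return 20 * np.log10(num / den) + 2.0

freqs = np.array([100, 500, 1000, 2500, 4000, 8000, 16000])
for f, a in zip(freqs, a_weighting_db(freqs)):
    print(f"{int(f):>6} Hz: {a:+6.1f} dB")
# Least attenuation (even a slight boost) lands around 2.5-4 kHz.
```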

 

 

 

 

Now add in what happens with multiple instruments with similar frequency ranges.

In this graph you have one loud instrument with a wide Q which overshadows another instrument that is lower in volume. Say the tall peak is the guitar and the lower one is the bass.

 

The trick to fixing this issue is to EQ the "masker" with a narrow Q. You can roll off the lows with a sharp high-pass filter below 250 Hz, then lower its level, and this should expose the masked sound. If you had another instrument above the guitar, you might want to use a low-pass filter to limit the high side and allow each instrument to exist in its own range within the mix.

 

[image: Audio_Mask_Graph.png - a loud, wide-Q peak overshadowing a quieter one]
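As a rough sketch of that fix (the signals, sample rate, cutoff, and trim below are made-up example values, not a recipe), this is one way to high-pass the masking track and drop its level with scipy:

```python
import numpy as np
from scipy import signal

sr = 44100  # example sample rate
t = np.arange(sr) / sr

# Hypothetical stand-ins: 'guitar' is the masker, 'bass' is getting buried.
guitar = 0.8 * np.sin(2 * np.pi * 110 * t) + 0.4 * np.sin(2 * np.pi * 1000 * t)
bass = 0.5 * np.sin(2 * np.pi * 80 * t)

# Sharp high-pass on the masker below 250 Hz (4th-order Butterworth),
# then a small level trim to help expose the masked part.
sos = signal.butter(4, 250, btype="highpass", fs=sr, output="sos")
guitar_hp = signal.sosfilt(sos, guitar) * 10 ** (-3 / 20)  # about -3 dB

mix = guitar_hp + bass
```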

Bass frequencies are felt more than they are heard, and they tend to pass through objects. Small changes in bass frequency or amplitude aren't heard easily.

 

High frequencies are highly directional. They tend to travel in a straight line and are easily absorbed by objects in their path. The ears use them to find the origin and direction of a sound, and they are key to getting a clear stereo image.

 

The thing that screws many people up, however, is the tools used to manipulate audio.

 

Those tools aren't based on acoustic or psychoacoustic science; they are based on electronics. One example is a typical EQ or frequency graph. The tools designed for manipulating audio signals are built on "electrical science", not "acoustic science". The physics behind the two use many similar mathematical formulas, but it's nowhere near a 1:1 match. You have to learn both and bridge the difference between them.

 

Here's one example. I'm not going to go very deep here, but this may point out what you have to deal with. (I'll skip digital EQs because they simply mimic what analog does, even though they really don't have to. Designers go out of their way to make digital tools work like analog, and in my book they would do better designing them around how the ears actually respond.)

 

This graph shows you a typical analog EQ response. Do you ever wonder why the frequencies are bunched up to the right? Hmm, could it be because of how our ears work?

 

The curve is created by a capacitor and resistor (and a coil in high-quality EQs) and is symmetric. The components' response to an electrical signal doesn't exactly mirror the acoustic vibrations at the ear, or how the ears actually respond to changes in frequency and amplitude.
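One quick way to see why the numbers crowd to the right: the ear judges pitch by ratios, so a display that gives every octave the same width has to squeeze an ever-wider span of Hz into each step. A few lines of Python make the point (starting at 20 Hz simply because that's the usual bottom of the audible range):

```python
# Equal perceptual steps (octaves) cover ever-wider spans in Hz,
# which is why a log-frequency EQ display crowds the labels rightward.
low = 20.0
for octave in range(10):  # roughly 20 Hz up to 20 kHz
    f_lo = low * 2 ** octave
    f_hi = low * 2 ** (octave + 1)
    print(f"octave {octave + 1:>2}: {f_lo:>7.0f} - {f_hi:>7.0f} Hz"
          f"  (span {f_hi - f_lo:>6.0f} Hz)")
```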

 

A typical EQ notch, which either cuts or boosts a signal like this on a non-linear frequency graph...

 

[image: a typical EQ boost/cut bell curve]

 

... is not going to produce the same results on either side of the peak. To the left of the peak, you might have a range of about 600 Hz being affected. To the right, you can have between 1 and 2 kHz being affected, right in the range where the ears are most sensitive.

 

These peaks look symmetrical and visually appealing, but your eyes don't see sound. This graph depicts how the electronic components work, not how the ears actually respond.
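To put some hedged numbers on that asymmetry: a bell that is symmetric in octaves (which is what looks symmetric on the display) covers far more Hz above its centre than below it. The centre frequency and bandwidth here are only illustrative, not read off the graph:

```python
def bell_edges_hz(f_center, bandwidth_octaves):
    """Edges of a peaking band that is symmetric in octaves,
    i.e. symmetric on a log-frequency EQ display."""
    half = bandwidth_octaves / 2
    return f_center / 2 ** half, f_center * 2 ** half

f0 = 1500.0                      # illustrative centre frequency
lo, hi = bell_edges_hz(f0, 2.0)  # 2-octave-wide bell, also illustrative
print(f"below the peak: {f0 - lo:.0f} Hz affected ({lo:.0f}-{f0:.0f} Hz)")
print(f"above the peak: {hi - f0:.0f} Hz affected ({f0:.0f}-{hi:.0f} Hz)")
# A 2-octave bell at 1.5 kHz spans 750-3000 Hz: ~750 Hz of reach below
# the peak but ~1500 Hz above it, right where the ears are most sensitive.
```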

 

Using a smooth ramp on the bass side and a sharper cutoff on the highs, to make the curve more linear in electrical terms, can yield some interesting results and can sound more natural in many cases.

 

[image: the same curve reshaped with a gentle low-side ramp and a sharper high-side cutoff]
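Here's one possible way to sketch that kind of asymmetric shape with scipy (the cutoffs and filter orders are arbitrary picks for the example, not a prescription): a gentle first-order roll-off on the low side of the band and a much steeper cutoff on the high side.

```python
import numpy as np
from scipy import signal

sr = 48000  # example sample rate

# Gentle ramp on the low side: 1st-order high-pass well below the band.
sos_low = signal.butter(1, 300, btype="highpass", fs=sr, output="sos")
# Sharper cutoff on the high side: 6th-order low-pass at the top of the band.
sos_high = signal.butter(6, 3000, btype="lowpass", fs=sr, output="sos")
sos = np.vstack([sos_low, sos_high])  # cascade the two filters

# Spot-check the combined response at a few frequencies.
w, h = signal.sosfreqz(sos, worN=4096, fs=sr)
for f in (100, 300, 1000, 3000, 6000, 12000):
    idx = int(np.argmin(np.abs(w - f)))
    print(f"{f:>6} Hz: {20 * np.log10(abs(h[idx]) + 1e-12):+6.1f} dB")
```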

 

 

 

 

 


  • Members

David, you might be interested in this NPR radio piece (transcription with clips, too) on the evolution of human hearing, and, in fact, on how vibration sensation, that most basic of perceptions, grew to be fundamental to the way our brains evolved...

 

How Sound Shaped The Evolution Of Your Brain

 

http://www.npr.org/sections/health-s...-of-your-brain


  • Members

 

Is this band of frequencies also the one that can potentially become the most obnoxious/fatiguing in recorded music if it's raised too high in a mix, or otherwise not judiciously EQ'd?

 

I look at it from the other side of the "is the glass half full or half empty" question, although I'd say that range is more like 300 Hz-3 kHz. Start with that as your standard and then balance around it.

 

 


  • Members

For me, excessive high and low frequencies are the most fatiguing, especially high frequencies. I think part of the reason people like analog tape is that it emphasizes midrange frequencies. I really like the sound of 15 IPS tape.

 

That low-mid frequency bump added a lot of character to late-sixties and early-seventies music. You can hear the switch to 30 IPS in a lot of records starting sometime in the mid-seventies.


