
No! 20 Hz–20 kHz is NOT the range of human hearing!


rasputin1963

Recommended Posts

  • Members
Well, listening is the final destination (at least if we sort of containerize everything that happens after sound wave meets ear), the receipt of product, if you will. But just as many a craftsman extends his own visual and spatial perception with various measuring tools in order to help assure that his eye hasn't led him astray, as well as to refine his knowledge and understanding of the physical elements before him, so, too, does a smart recordist not willfully throw aside the potentially more precise and almost certainly more consistent and reliable measurements provided by good measurement and test gear.



I can appreciate that for something like designing a device or background knowledge to apply to a process. However, in real life I don't look at or measure sound with anything other than my ears when playing, recording or mastering music, except of course VU meters. I don't even use guitar tuners... they don't do as good a job as my ears.

I'll use certain tools to test and aid in setup of a system or a room. But once that's done I'm all ears. ;)


  • CMS Author

 

Training can help reduce the effects of cognitive biases but you can't really learn your way free from them. They are part of how we're built.


But at the point where you're carefully picking your subject base, you are imposing other biases on any potential test results that will greatly limit their extrapolative value. The ideal, in order to be able to derive potentially valid extrapolation, is a random sample within the target population you want to be able to derive conclusions about.

 

Isn't that a little bit like asking, if you want to measure the noise in a factory, to take the average reading from 100 Radio Shack SPL meters rather than take one reading from a certified B&K meter?

 

The Fletcher-Munson loudness curves were derived by asking people to rate loudness on a scale (1 to 10 or something - I wasn't there) but these were ordinary people. And people's ranges were different. What's a 5 if someone hasn't taught you what a 10 is, and what a 1 is? Can anyone really say that one sound is twice as loud as another? What's "twice?"

 

But you can teach someone what a certain kind of distortion sounds like (and I'm not just talking about clipping or added harmonics, but including things like data reduction, but not all at once, please) and then you can determine at what level they can begin to recognize that distortion. You'll get better results from people who are attuned to what you're trying to measure than if you just take 100 people off the street and say "hold up a finger when it starts sounding funny."


  • CMS Author

 

I don't know about all this sixth sense ultrasonic alien ear mumbo jumbo.


But I know I like 24 bit 48K worlds better than 16 bit 44.1.

 

Do you know why? Could you consistently identify each one in a test? Could you teach me how to identify them?

 

Honestly, I think it has more to do with how the DAWs process summing and fx, than anything.

 

That's a whole other issue, and one that has pretty much gone away, just like the "you have to record as close to full scale as possible" tale that was valid in the days of 16 bit converters that were accurate to about 12 bits.

 

However, I do wonder how much credence there is to the notion of, say, a 40 kHz signal affecting a 10 kHz signal. The reason I consider it is that I know I can have a bass line centered at 125 Hz, but when you boost 500 Hz it sounds like more 125 without having to actually increase that energy.

 

Another red herring. You aren't boosting 500 Hz, you're boosting more at 500 Hz than 125 Hz, but you're boosting the fundamental range as well as the overtones. And if you boost the overtones, they're reinforcing the fundamental, so it sounds louder. This was an old trick to make synth bass sound good on cheap speakers. You copied the (MIDI) track and transposed it up an octave, then mixed a little of that in. Or found a patch with more 2nd harmonic.
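For the skeptical, the skirt of an ordinary peaking EQ makes this easy to put numbers on: a bell centered at 500 Hz still has some gain two octaves below center. A quick sketch using the well-known RBJ audio-EQ-cookbook peaking biquad (the +6 dB, Q=1, 48 kHz settings are just illustrative):

```python
import numpy as np
from scipy.signal import freqz

def peaking_biquad(f0, gain_db, q, fs):
    """RBJ audio-EQ-cookbook peaking-filter coefficients."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return b / den[0], den / den[0]

fs = 48000
b, a = peaking_biquad(500, 6.0, 1.0, fs)   # +6 dB bell at 500 Hz

# Evaluate the filter's gain at the fundamental and at the boost center
for f in (125, 500):
    w = 2 * np.pi * f / fs
    _, h = freqz(b, a, worN=[w])
    print(f"{f} Hz: {20 * np.log10(abs(h[0])):+.2f} dB")
```

The bell's skirt lifts 125 Hz by a fraction of a dB directly, before you even count the overtone reinforcement described above.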

... jive omitted...

 

And I'm not fond of SRC, I don't trust it. Down the road, if given unlimited hard drive space, I'd probably bump it up to 88.2. As many have noted, that converts nicely to 44.1, though it seems like a big space investment for me to diddle with now.

 

Another misunderstanding. Sample rate conversion like that is inaccurate. Sample rate conversion by resampling is pretty accurate. Another "discovery" that makes things that we used to try to avoid pretty transparent today.
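For what it's worth, the supposedly awkward 48 kHz to 44.1 kHz conversion is just a 147:160 rational ratio, and a modern polyphase resampler handles it with errors far below audibility. A quick numerical sketch (scipy's resampler; the 1 kHz test tone is arbitrary):

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 48000, 44100
up, down = 147, 160                    # 48000 * 147/160 = 44100 exactly
assert fs_in * up == fs_out * down

t_in = np.arange(fs_in) / fs_in        # one second of a 1 kHz sine at 48 kHz
x = np.sin(2 * np.pi * 1000 * t_in)

y = resample_poly(x, up, down)         # polyphase resampling to 44.1 kHz

t_out = np.arange(len(y)) / fs_out     # ideal 1 kHz sine at the new rate
ideal = np.sin(2 * np.pi * 1000 * t_out)

# Compare away from the edges, where the anti-alias filter rings in/out
err = np.max(np.abs(y[1000:-1000] - ideal[1000:-1000]))
print(f"worst-case interior error: {20 * np.log10(err):.1f} dBFS")
```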

 

But then, not everyone does it right.


  • Members

I can appreciate that for something like designing a device or background knowledge to apply to a process. However, in real life I don't look at or measure sound with anything other than my ears when playing, recording or mastering music, except of course VU meters. I don't even use guitar tuners... they don't do as good a job as my ears.


I'll use certain tools to test and aid in setup of a system or a room. But once that's done I'm all ears. ;)

Oh, yeah, I tend to think of setting up and maintaining my gear and making music as two very different activities. Just as in the old days, we had to switch hats and mental spaces when we moved from calibrating the multitrack to making the music, it's important to use the right tools (physical, virtual, or strictly mental) for a given job.

 

When I'm testing and setting up gear, I tend to rely on objective measurement (and use my hearing as a sort of checksum, of course).

 

When I'm making music, I'm almost completely intuition driven. If a given section feels like it should have 15 bars, say, I don't go looking for some sort of bar-borrowing rationale, I jump in and follow my intuition. (Of course, I'm often pleasantly surprised to see that extra bar pop up somewhere else, apparently intuition driven.)


  • Members

 

Isn't that a little bit like asking, if you want to measure the noise in a factory, to take the average reading from 100 Radio Shack SPL meters rather than take one reading from a certified B&K meter?


The Fletcher-Munson loudness curves were derived by asking people to rate loudness on a scale (1 to 10 or something - I wasn't there) but these were ordinary people. And people's ranges were different. What's a 5 if someone hasn't taught you what a 10 is, and what a 1 is? Can anyone really say that one sound is twice as loud as another? What's "twice?"


But you can teach someone what a certain kind of distortion sounds like (and I'm not just talking about clipping or added harmonics, but including things like data reduction, but not all at once, please) and then you can determine at what level they can begin to recognize that distortion. You'll get better results from people who are attuned to what you're trying to measure than if you just take 100 people off the street and say "hold up a finger when it starts sounding funny."

 

 

It sounds like you're not talking about testing designed to determine the range of various human perceptions -- which was the original topic here -- but rather some sort of purpose-oriented focus group that implements double blind testing in order to better pin down a group consensus evaluation on some piece of gear or process within the framework of whatever training or focus is imposed.

 

Those two endeavors sit at seemingly opposite extremes in terms of intent, unified, perhaps, only by their presumed use of double blind testing.

 

The first is a research oriented endeavor. The second, some form of applied science presumably oriented to aid in decision making.


  • Members

The difference between 20 bit and 24 bit AD is, for most of our gear, inconsequential to pretty much non-existent.


20 bit AD affords 120 dB S/N. Few of us here have an input chain anywhere near quiet enough to 'take full advantage' of that.

 

 

Actually, 120dB is the theoretical maximum, but even today's electronics don't come anywhere close due to the noise floor, the fact that laser-trimming components still has tolerances, unavoidable non-linearities, 1/f noise, etc. This is why 16-bit converters have a typical S/N ratio of around 86dB, not the 96dB theoretical maximum. Or as the old saying goes, "For 16-bit conversion, buy a 20-bit converter."
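The figures being tossed around here all come from the ~6.02 dB-per-bit rule (a full-scale sine adds another 1.76 dB to the quoted SNR). A two-line sketch:

```python
# Quantization dynamic range is about 6.02 dB per bit;
# a full-scale sine test signal adds 1.76 dB to the theoretical SNR.
for bits in (12, 16, 20, 24):
    dr = 6.02 * bits
    print(f"{bits:2d} bits: {dr:6.1f} dB dynamic range, "
          f"{dr + 1.76:6.1f} dB SNR (full-scale sine)")
```

These are ceilings set by the math alone; as noted above, real analog front ends fall well short of them.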


  • Members

The interesting thing to me that no one ever considers here is the problem with listening to high frequencies in stereo - you get terrible comb filtering between the speakers if you aren't absolutely perfectly centered between them (i.e. with your head in a vise). And you get some comb filtering even if you are perfectly centered. This is due to having 2 speakers alone, and is in addition to any problems caused by the room.

This can easily be proven to the non-theory types by playing a high-frequency test tone (above ~12 kHz) in stereo and moving your head slightly. The level varies tremendously. Repeat it in mono. Try different frequencies.

High frequencies just don't work well in stereo, so if you want to enjoy that hypersonic stuff, you'd better listen in mono.
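The arithmetic behind the head-in-a-vise effect is simple: two equal coherent sources whose arrival times differ by τ seconds sum to a level proportional to |cos(πfτ)|. A rough sketch (the speed of sound and the offsets are illustrative, and "offset" here means path-length difference to the two speakers):

```python
import numpy as np

c = 343.0                      # speed of sound in air, m/s (room temperature)

def stereo_sum_db(freq_hz, path_diff_m):
    """Level of two equal coherent sources whose path lengths differ,
    relative to the perfectly centered (in-phase) case."""
    tau = path_diff_m / c                            # arrival-time difference
    mag = abs(1 + np.exp(-2j * np.pi * freq_hz * tau)) / 2
    return 20 * np.log10(mag) if mag > 0 else -np.inf

# A path difference of ~14 mm is half a wavelength at 12 kHz: a deep null
for d_mm in (0, 5, 10, 14.3):
    print(f"{d_mm:5.1f} mm offset: {stereo_sum_db(12000, d_mm / 1000):+7.1f} dB")
```

Half a wavelength at 12 kHz is about 14 mm, so a barely perceptible head movement swings the summed level from full to nearly cancelled, which is exactly the test-tone experiment described above.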


Oh, and another note - you need exactly 3 samples (not 2) to define everything about a sine wave (amplitude, frequency and phase); having more than 3 will not add any information or accuracy. 2 samples is exactly at the Nyquist frequency, and you need to be below Nyquist, which means at least 3 samples.
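That point can be shown directly: at exactly the Nyquist frequency (two samples per cycle), different sine waves produce identical samples, so amplitude and phase can't be separated. A minimal sketch:

```python
import numpy as np

n = np.arange(8)

# Sampling A*sin(pi*n + phi) -- a sine at exactly Nyquist -- yields
# A*sin(phi) * (-1)**n: amplitude and phase collapse into a single number.
s1 = 1.0 * np.sin(np.pi * n + np.pi / 2)        # A = 1.0, phi = 90 degrees
s2 = 2.0 * np.sin(np.pi * n + np.pi / 6)        # A = 2.0, phi = 30 degrees

print(np.allclose(s1, s2))   # True: identical samples from different sines
```

Sampling even slightly above two samples per cycle breaks the degeneracy, which is why the sampling theorem demands strictly greater than twice the signal frequency.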

drewfx


  • CMS Author

 

It sounds like you're not talking about testing designed to determine the range of various human perceptions -- which was the original topic here -- but rather some sort of purpose-oriented focus group that implements double blind testing in order to better pin down a group consensus evaluation on some piece of gear or process within the framework of whatever training or focus is imposed.

 

Not necessarily. I think perhaps we (or maybe just me) are losing sight of the original question. I think Jeff may have steered us off course when he asked about a headphone amplifier that had a specified frequency response well above 20 kHz. Which is the real question here:

 

Does "testing" always have to have "double blind" appended to it? It's probably the best test, but simply "do you hear it or not?" is probably what people really want to know.


  • Members

Do you know why? Could you consistently identify each one in a test? Could you teach me how to identify them?


That's a whole other issue, and one that has pretty much gone away, just like the "you have to record as close to full scale as possible" tale that was valid in the days of 16 bit converters that were accurate to about 12 bits.


Another red herring. You aren't boosting 500 Hz, you're boosting more at 500 Hz than 125 Hz, but you're boosting the fundamental range as well as the overtones. And if you boost the overtones, they're reinforcing the fundamental, so it sounds louder. This was an old trick to make synth bass sound good on cheap speakers. You copied the (MIDI) track and transposed it up an octave, then mixed a little of that in. Or found a patch with more 2nd harmonic.

... jive omitted...


Another misunderstanding. Sample rate conversion like that is inaccurate. Sample rate conversion by resampling is pretty accurate. Another "discovery" that makes things that we used to try to avoid pretty transparent today.


But then, not everyone does it right.

 

 

Well. What to say other than, it seems like you're disagreeing with me, but I can't see where you've stated anything contrary to what I said.

 

The whole 125 Hz with a 500 Hz boost, well you say I'm wrong, but then repeat what I said (boosting the harmonics reinforces the fundamental). I'm confused. So do you agree or not, when you bump 500 Hz the 125 Hz gets bigger? As to why it happens, I don't have all the details, but I know some of it is pure math and some has to do with psychoacoustic effects.

 

We'll have to just leave it at disagree, with the whole summing stuff. Because honestly I'm no expert, nor am I claiming the conjecture (thinking out loud) I posted is anything more than that. But I know that giant sessions with loads of plug-ins work better at higher sample and bit rates. They just do.

 

With SRC, well, they keep getting better all the time; my argument is, if they didn't need to, they wouldn't.

Example: I can load a CD into my computer and play it in iTunes, then I can import it into my library at 16-bit 44.1k and play. I can hear it degrade, not enough to bug me, but I can. This simple ripping procedure has a negative impact, and it gets worse when you start converting to so-called lossless codecs and/or mp3. In a DAW scenario, I'm sorry but I can hear the difference when a file has been converted from 24/48 to 16/44. Could I tell if I didn't have both versions to play side by side? Probably not.


  • Members


Actually, 120dB is the theoretical maximum, but even today's electronics don't come anywhere close due to the noise floor, the fact that laser-trimming components still has tolerances, unavoidable non-linearities, 1/f noise, etc. This is why 16-bit converters have a typical S/N ratio of around 86dB, not the 96dB theoretical maximum. Or as the old saying goes, "For 16-bit conversion, buy a 20-bit converter."

 

 

 

state of the art converters:

 

available dynamic range for various gain settings:

 

122dB dynamic range at 21dB micpre gain

111dB dynamic range at 40dB gain setting

091dB dynamic range at 60dB gain setting

 

what it says:

 

you have 20+ bits noise floor at micpre gain of 21dB

you have 18+ bits noise floor at micpre gain of 40dB

you have 15+ bits noise floor at micpre gain of 60dB
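Those bit figures are just the measured dynamic range divided by ~6.02 dB per bit, for what it's worth:

```python
# Converting a converter's measured dynamic range into an "effective bits"
# figure at each preamp gain setting (numbers from the post above)
for gain_db, dr_db in ((21, 122), (40, 111), (60, 91)):
    bits = dr_db / 6.02
    print(f"{gain_db} dB preamp gain: {dr_db} dB dynamic range ~= {bits:.1f} bits")
```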


  • CMS Author

 

With SRC, well, they keep getting better all the time; my argument is, if they didn't need to, they wouldn't.

 

Well, sample rate conversion is going to be necessary for some time now. People, for whatever reason, want to use higher sample rates to record, process, and mix, than what the end user can play. So why not try to do it better so the product delivered to the end user is closer to what you heard when you mixed it?

 

Example: I can load a CD into my computer and play it in iTunes, then I can import it into my library at 16-bit 44.1k and play. I can hear it degrade, not enough to bug me, but I can.

 

What are you comparing? The audio out of your CD player versus the audio out of your computer? Are you listening to both through the same D/A converter? Through the same speakers?

 

If you're playing the CD in the drive in your computer, then ripping it and playing the WAV file, what you're hearing is a difference in how error correction is handled in real time (with the computer as the CD player) and error correction of the file as "ripped." The file you play after ripping is in reality more accurate than what you're hearing when playing the CD in real time. Why you believe it's degraded is a matter of how you perceive it. But then sometimes we like things better when they're degraded in certain ways. Perhaps that's the psychoacoustics of which you speak.

 

This simple ripping procedure has a negative impact, and it gets worse when you start converting to so called lossless codecs and/or mp3.

 

Now that's something that I can understand, and have no argument with. However, the reality of the situation is that while you may care, most of the rest of the world doesn't. For one thing, they have no reference for comparison. For another, it sounds OK. It doesn't have flutter, it has all the frequency response they can hear on their computer speakers or ear buds, and it's not unreasonably distorted. The fact that it doesn't sound like the original is of no consequence to them because they've never heard the original. Unless of course they have the same CD you ripped it from. But then they probably already know about the degradation of low bit rate encoding and the inaccurate response of their playback system.

 

In a DAW scenario, I'm sorry but I can hear the difference when a file has been converted from 24/48 to 16/44. Could I tell if I didn't have both versions to play side by side, probably not.

 

Exactly. That you can tell a difference is a good thing. That you recognize that it's not a drastic difference and it won't cause you to get fired is also a good thing. So what's the problem? You don't like that I'm agreeing with you? I'm only trying to explain what you didn't - why you might be hearing a difference.


  • Members

 

But then sometimes we like things better when they're degraded in certain ways.

 

 

Mike, while I respect your opinions and you are obviously well informed, please keep your naughty naughty cheerleader fantasies out of the thread... unless of course you are willing to upload erotic charcoal sketches illustrating your point.

It's just too frustrating otherwise.


  • Members

Actually, 120dB is the theoretical maximum, but even today's electronics don't come anywhere close due to the noise floor, the fact that laser-trimming components still has tolerances, unavoidable non-linearities, 1/f noise, etc. This is why 16-bit converters have a typical S/N ratio of around 86dB, not the 96dB theoretical maximum. Or as the old saying goes, "For 16-bit conversion, buy a 20-bit converter."

Of course. I would have said all that but I didn't want to seem pedantic.

 

 

 

 

:D


  • Members
I can assure you without question that the Creator of the Universe does.



But I'm pretty sure he hasn't ever weighed in on audio technology - though many people seem to have religious convictions regarding it nonetheless. :)

drewfx


  • Members

This is basically what I was trying to say for like 30 hours when we were discussing the methods used by Meyer and Moran in their Audibility of a CD-Standard A/D/A Loop... study. :poke: ;)

 

Now hold on a minute there, partner, you have it exactly backwards. :D

 

First, the best way to assess transparency is by measuring. But some audiophile types reject measurements, mostly because they don't understand the science. But okay, I'll give them a pass. So then we use a double-blind test. The key is you don't do one test with one person, because random guessing has a 50-50 chance of a positive (or negative) result. So instead we run many tests on many people, using statistical significance to tell if anyone can reliably hear the difference. After performing hundreds of tests on dozens of subjects - all experienced listeners, I might add - it was proven that high-res audio is no better than standard CD audio. At least for a delivery medium, which is all they tested.
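The statistics behind this are plain binomial arithmetic: one lucky guess means nothing, but many trials well above 50% become vanishingly unlikely by chance. A sketch (the trial counts are illustrative, not from any particular study):

```python
from math import comb

def p_value(correct, trials):
    """One-sided binomial p-value: the chance of scoring at least this
    well on an A/B test purely by guessing (50-50 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"1/1 correct:    p = {p_value(1, 1):.3f}")    # a coin flip, p = 0.500
print(f"12/16 correct:  p = {p_value(12, 16):.3f}")  # conventionally significant
print(f"60/100 correct: p = {p_value(60, 100):.3f}")
```

This is why aggregating many trials over many listeners matters: a 60% hit rate that would be meaningless over 10 trials becomes statistically significant over 100.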

 

Speaking of Meyer and Moran, we're all still waiting for that "proof" you promised to post, oh, about a year ago! :D

 

--Ethan


Archived

This topic is now archived and is closed to further replies.

