
High Resolution Audio Defined!



  • CMS Author

So now we know!

 

From a press release from the Consumer Electronics Association:

 

"LOS ANGELES (June 12, 2014) – DEG: The Digital Entertainment Group, in cooperation with the Consumer Electronics Association (CEA)® and The Recording Academy®, announced today the results of their efforts to create a formal definition for High Resolution Audio, in partnership with Sony Music Entertainment, Universal Music Group and Warner Music Group.

 

The definition is accompanied by a series of descriptors for the Master Quality Recordings that are used to produce the hi-res files available to digital music retailers. These can be used on a voluntary basis to provide the latest and most accurate information to consumers.

 

The descriptors for the Master Quality Recording categories are as follows:

 

MQ-P - From a PCM master source 48 kHz/20 bit or higher; (typically 96/24 or 192/24 content)

 

MQ-A - From an analog master source

 

MQ-C - From a CD master source (44.1 kHz/16 bit content)

 

MQ-D - From a DSD/DSF master source (typically 2.8 or 5.6 MHz content)

 

To further expand the High Resolution Audio initiative, The Recording Academy, the DEG and the CEA are sponsoring a special High Resolution Audio Listening Experience event, which will be held at Jungle City Studios in New York on Tuesday, June 24 from 6PM to 9PM during CE Week."

 

So, dig out your old 4-track Portastudio masters, mix them to cassette, and call them "High Resolution MQ-A." Maybe there's more to this as they didn't specify the delivery format, only the source. Is a 64 kbps MP3 stream of a DSD recording still MQ-D? Remember when CDs were AAD, ADD, and DDD? How long did that last?

 


  • Members

I just figured something out -- something that might actually just be sort of significant...

 

If there's a push to 20 or 24 bit files, hopefully we'd see a push to include at least 48/24-capable pathways in consumer devices.

 

Now, that won't help the crappy analog electronics and speakers in these things, of course, but it would presumably at least elevate currently 16 bit consumer digital electronics above the problem of [digital] volume control crap-out, where turning the level down from the digital side means the signal falls into the digital noise floor more readily.

 

Now, certainly, I understand that signal exists under the digital noise floor, but it's not necessarily pretty down there -- which can get to be an issue when you plug such a device into a really loud playback system and then find yourself having to cut level from inside the digital pathway by 50 or 60 dB. 60 dB off ~140 leaves you with pretty good signal. 60 off the ~90 dB SNR of 16 bit takes you back to the 1950's.
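To put rough numbers on that, here's a minimal Python sketch using the ideal 6.02 dB/bit + 1.76 dB rule of thumb (real converters land several dB lower, so treat these as ceilings):

```python
# Sketch: how much usable SNR survives a digital level cut at various bit depths.
# The quantization noise floor stays fixed while the signal drops, so every dB
# of digital attenuation comes straight out of the usable dynamic range.

def ideal_snr_db(bits: int) -> float:
    """Theoretical full-scale SNR of an ideal quantizer: 6.02*bits + 1.76 dB."""
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    for cut_db in (0, 30, 60):
        print(f"{bits}-bit path, {cut_db:2d} dB digital cut -> "
              f"~{ideal_snr_db(bits) - cut_db:.0f} dB usable SNR")
```

A 60 dB cut leaves a 16-bit path with roughly 38 dB of usable SNR, while a 24-bit path still has close to 90 dB -- the whole argument for wider consumer pathways in one table.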

 

I have a pair of 200 w/ch powered monitors. Even with their 20 dB input pads fully engaged, I have to lower my signal by 25-30 dB or more just to keep it at a comfortable level with well mastered, unsquashed material. I elect to do it with an analog mixer (really just using it as an active volume control), which, despite added noise, allows me to put a relatively healthy signal out the DAC -- at the typical mixer level I'm still invoking as much as 20 dB of attenuation, but that's still keeping the digital signal path SNR well above that of the analog channel.

 

 

(Of course, there's been nothing to stop consumer electronics companies from upping their digital pathways to 20 or 24 bit -- and some may well use 18 internally already -- independent of the bit depth of whatever the 'standard format' of the moment is -- and that, 'unilaterally,' would help ease the device past the 'shrinking SNR' issue implicit with digital 'volume' control.)


  • Members
I just figured something out -- something that might actually just be sort of significant...

 

If there's a push to 20 or 24 bit files, hopefully we'd see a push to include at least 48/24-capable pathways in consumer devices.

 

I know I mentioned this before, but computer manufacturers like HP have been including 24/96 cards in their computers for several years now. The hardware capability is there. The question really is whether anyone will bother taking advantage of it.


  • CMS Author

it would presumably at least elevate currently 16 bit consumer digital electronics above the problem of [digital] volume control crap-out, where turning the level down from the digital side means the signal falls into the digital noise floor more readily.

 

I didn't think anyone used the volume control any more. Isn't that why they keep making music louder and louder? So you won't have to turn the volume up for a "quiet" song? ;)

 

Now, certainly, I understand that signal exists under the digital noise floor, but it's not necessarily pretty down there -- which can get to be an issue when you plug such a device into a really loud playback system and then find yourself having to cut level from inside the digital pathway by 50 or 60 dB.

 

This is a system problem - the "really loud playback system" isn't set up correctly. Most of us understand not to connect a line level source to a mic input. When we have to do that, we should be attenuating the analog signal. In a pinch (when you don't know what you'll be connecting to until you get there) you do what you can to make it work. I gave kudos to a mixer I reviewed a while back for setting the gain of the "tape" input (RCA) jacks closer to 0 dBu than the nearly universal -10 dBu. The typical assumption is that what you'll be connecting to those inputs is a "consumer" device with a fairly low maximum output level. But today what's most likely to be connected is an iPod's headphone output, which is nominally several dB "hotter" than a 20 year old CD or cassette player.
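For anyone hazy on the reference levels being tossed around here: dBu is defined against 0.775 V RMS, so the conversions are one-liners. A quick Python sketch (the labels are mine, just illustrative round numbers):

```python
def dbu_to_vrms(dbu: float) -> float:
    """dBu is referenced to 0.775 V RMS (1 mW into 600 ohms)."""
    return 0.775 * 10 ** (dbu / 20)

# The nominal levels discussed above:
for label, level in (("pro nominal", +4),
                     ("hotter tape input", 0),
                     ("typical consumer tape input", -10)):
    print(f"{label}: {level:+3d} dBu = {dbu_to_vrms(level):.3f} V RMS")
```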

 

I have a pair of 200 w/ch powered monitors. Even with their 20 dB input pads fully engaged, I have to lower my signal by 25-30 dB or more just to keep it at a comfortable level with well mastered, unsquashed material.

 

That sounds like kind of an unusual case, which calls for another 20 dB pad in line between those speakers and your source. I've encountered a similar problem in the other direction. The maximum input level of my Korg MR-1000 recorder, even in the low gain setting, is +18 dBu. Even though you can turn the record level control down to get the meters on scale, if what's hitting the input peaks above +18 dBu, the input stage, which is ahead of the record level control, will clip. I have a pair of cables into which I've built a 10 dB pad that I carry when taking the recorder out to record from a PA console where I don't have control of the source level. I also have a pot in a box that's a little clumsier than cables but gives me more flexibility.

 


  • Members

Well, most powered monitors aren't so over-powered, for sure. Frankly, I have no idea what the boys and girls at Event were thinking when they made the 20/20bas. I firmly believe 120 W/speaker would have been sufficient for most folks. (As it is, they're bi-amped with 135 w to the woofer and 65 to the tweeter.) I guess they were thinking that many smaller studios don't have client-impressing soffit-mounted monster monitors and so they should make these things ridiculously loud. I've never taken them to the point of obvious distortion and I hope I never do. They are loud.

 

(Event later came out with, as I recall, a 100w/side 'junior' version. They also had a passive version you could drive from conventional 2 channel stereo amps. Not sure if they had connections for biamping; but I think a large part of the appeal of the bi-amped active boxes is the active crossover carefully tuned to the speaker, which is presumably why they're able to deliver frequency response they spec at 38-20k Hz +/- 2 dB. On the downside, like all ported reflex speakers, their damping [lack of resonance, accurate time response] is not nearly as good as with acoustic suspension speakers like the ubiquitous NS10's. That said, if I could only keep my NS10s OR my Event 20/20bas, I'd take the Events every time, even if the Yamahas are worth double what I paid and the Events are probably worth half. :D )

 

Of course, I could use a passive monitor controller, but the frequency imbalances that arise from passive attenuation are more a concern than the relatively tiny noise native to the (original) Mackie 1202 I use as their throttle. (Of course, I could get an active controller but, TBH, I've heard so many people going on endlessly about whether/how much their Big Knobs and Central Stations affect the sound that I'm content sticking with the devil I know, which, of course, I already have, and which I'm fairly confident of.)

 

 

But, for sure, if one had more 'reasonably' powered powered speakers, maybe in the 80-100w range and with at least some variable attenuation and was feeding with a 24 bit system, using a control in the 24 bit path, then he should mostly be staying up in a reasonable range with normal listening levels. (And, you know, when you're turning it WAY down, you probably are willing to lose some detail, anyhow, since you're likely doing something else like conversing or such.)


  • CMS Author
Well, most powered monitors aren't so over-powered, for sure. Frankly, I have no idea what the boys and girls at Event were thinking when they made the 20/20bas. I firmly believe 120 W/speaker would have been sufficient for most folks. (As it is, they're bi-amped with 135 w to the woofer and 65 to the tweeter.) I guess they were thinking that many smaller studios don't have client-impressing soffit-mounted monster monitors and so they should make these things ridiculously loud. I've never taken them to the point of obvious distortion and I hope I never do. They are loud.

 

It's not the watts that make them appear loud in your studio, it's the number of volts going in that's required to get to that power level - in your case, not many. There's a lot of voltage gain. For quite a while, I was using a Hafler DH-120 to drive my (passive) speakers. That amplifier has a fixed input level and when my Soundcraft mixer (or any "standard" mixer, for that matter) was showing 0 VU, I got plenty of volume with the control room monitor level control on the mixer at about 9 o'clock. Anything beyond 12 o'clock would clip the input of the amplifier. I wanted to have more working range on the control room monitor knob, so I just added a 12 dB pad to the input of the amplifier and now I can turn the mixer up to 11 without clipping or driving myself out of the room.

 

That's "gain staging."

 

Of course, I could use a passive monitor controller, but the frequency imbalances that arise from passive attenuation are more a concern than the relatively tiny noise native to the (original) Mackie 1202 I use as their throttle.

 

What frequency imbalances? Nothing has flatter frequency response than a resistor. What you can get with a passive volume control, and really, what's in your Mackie mixer is the same thing, is a slight change in the left/right balance over the rotational range of the pot. Your off-the-shelf dual pot just isn't made for really accurate tracking between the two elements. It gets worse if you're building a balanced stereo volume control because then you need four pots and if the two variable resistors on a channel don't track accurately, the common mode rejection will suffer.

 

Many years ago, I wrote an article in Recording Magazine about how to work out a do-it-yourself project, and used a monitor controller as an example. I think I may have "invented" the monitor controller with that article because at the time there was no such thing. The article was about how to define a problem and figure out what it takes to solve it. To solve the tracking issue, I proposed buying a handful of dual pots from Radio Shack (they only cost about $2 at the time) and showed a test setup using a battery and a voltmeter to pick out the one of the batch that had the most accurate tracking between elements.

 

 


  • Members

Hmm... I have a rule that I generally defer on electronics issues to those who conclusively know more than I do -- which you conclusively do -- but I have to say that I did an hour or two of research on passive volume controls a year or two ago and it was certainly my takeaway that passive attenuation networks result in impedance shifts across their attenuation range that result in tonal imbalance. But I'll get back to you on that -- don't rewrite any manuals on my account. :D

 

 

Specifically, I'll be taking a good look at this article from the nice folks at Benchmark: http://forum.benchmarkmedia.com/disc...l-technologies

 

Here's what they write in the intro to their section on passive volume control:

 

Passive Attenuator

 

A passive attenuator is simply a resistor network or potentiometer creating a voltage divider in the signal path. The output of a voltage divider is a scaled version of the input signal. A passive attenuator uses only ‘passive’ components, which are components that do not require a power source.

 

Passive attenuators have a reputation of being completely benign. However, a poorly designed passive attenuator can be detrimental to the quality of the audio. Passive attenuators can add noise and distortion, and they often change the frequency response of the system. Passive attenuators with high impedance (greater than 500 ohms) are particularly problematic.

[bold added]

 

That 'poorly designed' is obviously a potentially very important qualifier. ;)

 

 

EDIT: I'd recommend that Benchmark whitepaper on volume control to anyone hazy on volume control issues since it explains them well at a level most reasonably experienced recordists should get right away. The section titled, Overview of Volume Control Implementations, does a much better job of exploring the digital volume control issues I touched on.

 

In addition to dynamic-range limitations, one should also be concerned about distortion induced by an inferior DSP algorithm. If the designer does not implement proper dithering, severe non-harmonic distortion will occur. Many computer playback systems lack dither. 16-bit systems have noticeable distortion when dither is omitted. 24-bit and 32-bit systems are much more forgiving when dither is omitted.

Of course, we all carefully apply dither in our DAWs (or more likely our DAWs are set up to do it automatically at our discretion) when performing DSP or truncating bit depth of signals.

 

But if the digital volume controls in our computers' media playback systems, our phones, and various consumer playback devices don't properly apply dither, then on top of the reduced SNR we not only get greater quantization distortion -- our signal is also that much more 'submerged' in that distortion and noise floor, and, consequently, such distortion becomes that much more noticeable.
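To make the dither point concrete, here's a minimal numpy sketch of TPDF dither ahead of a 16-bit word-length reduction (illustrative only -- not any particular player's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_16bit(x: np.ndarray, dither: bool) -> np.ndarray:
    """Quantize a float signal (-1..1) to 16-bit steps, optionally with TPDF dither."""
    lsb = 1.0 / 32768.0
    if dither:
        # TPDF dither: sum of two uniform randoms spanning +/-1 LSB overall,
        # added *before* rounding so the error decorrelates from the signal.
        x = x + (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * lsb
    return np.clip(np.round(x / lsb) * lsb, -1.0, 1.0)

# A -80 dBFS sine, only a few LSBs tall -- where undithered truncation is ugliest.
t = np.arange(48000) / 48000.0
sig = 10 ** (-80 / 20) * np.sin(2 * np.pi * 1000 * t)

for d in (False, True):
    err = to_16bit(sig, dither=d) - sig
    # Undithered, err is a deterministic function of the signal (i.e., distortion);
    # dithered, it's randomized into a steady, noise-like floor.
    print(f"dither={d}: rms error = {20 * np.log10(np.sqrt(np.mean(err ** 2))):.1f} dBFS")
```

The dithered error comes out a few dB larger in RMS terms, but it's benign hiss; the undithered error rides along with the signal, which is what we hear as distortion.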


  • CMS Author

I suppose that a passive attenuator could cause a high frequency loss in a poorly designed system - assuming that the system included the cable between the attenuator output and destination input.

 

Given that a typical line output from a solid state device has a source impedance of 50 to 100 ohms, you don't want to load it with anything less than ten times that impedance. So if your attenuator has a total resistance of 1 kΩ and you want 10 dB of attenuation, that means it will consist of two resistors, about 700 ohms and 300 ohms, and you'll take the output to the next device across that 300 ohm resistor. So the source impedance is now 300Ω instead of 50 or 100Ω.

 

That's not too bad, but let's say you use a 100kΩ pot so you can have a volume control. Now, for 10 dB of attenuation, you have a source impedance of 30kΩ. That's getting into the range where the capacitance of the cable can start making it act like a high-cut filter if you have a long enough cable. And since the source impedance varies as you move the slider on the pot, the roll-off frequency of the filter you've made by adding a piece of cable can change. Surely you've heard the arguments about using high-juju guitar cables when you want to stand more than a few feet from your amplifier (or, from companies who make such cables, even if you want to sit on your amplifier while you play).
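The arithmetic is easy to sketch. The round numbers above treat the shunt leg alone as the new source impedance; the Python sketch below computes the actual parallel combination (a bit lower) and then the cable corner frequency. The 600 pF figure is my assumption -- roughly 6 m of ordinary ~100 pF/m cable:

```python
import math

def lpad(total_ohms: float, atten_db: float):
    """Split a total resistance into series + shunt legs for the wanted attenuation.

    Returns (series, shunt, output impedance seen by the next device); the
    driving stage's own 50-100 ohm source impedance is ignored, as in the
    round numbers above.
    """
    ratio = 10 ** (-atten_db / 20)          # output/input voltage ratio
    shunt = total_ohms * ratio
    series = total_ohms - shunt
    z_out = series * shunt / (series + shunt)
    return series, shunt, z_out

def cable_corner_hz(z_out_ohms: float, cable_pf: float) -> float:
    """-3 dB point of the low-pass formed by source impedance and cable capacitance."""
    return 1.0 / (2 * math.pi * z_out_ohms * cable_pf * 1e-12)

s, p, z = lpad(1_000, 10)
print(f"1 kOhm pad, 10 dB: series ~{s:.0f}, shunt ~{p:.0f}, Zout ~{z:.0f} ohms")

s, p, z = lpad(100_000, 10)                 # a 100 kOhm pot set for 10 dB
print(f"100 kOhm pot, 10 dB: Zout ~{z / 1000:.1f} kOhm; "
      f"600 pF of cable rolls off around {cable_corner_hz(z, 600) / 1000:.0f} kHz")
```

With the 1 kΩ pad the corner lands around a megahertz, far above the audio band; with the 100 kΩ pot it falls near 12 kHz, right on top of it -- which is the frequency-response complaint in a nutshell.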

 

So, like most anything else electrical, the effect is a matter of degree, and it can be measured. Benchmark has products to sell that cost more than a Radio Shack pot or a couple of fixed resistors. You need to decide if the improvement is really significant enough to be worth the cost. For some it is, for the rest of us, it usually isn't.

 



  • Members

Well, I only used the Benchmark page because it was pretty well laid out and had a good overview -- but I've read about this issue a fair amount in materials going back years. This info and the conclusions are hardly unique to Benchmark. But I'll see what more I can come up with.


  • CMS Author

Why don't you just try it and see what it does to your system? We've used passive attenuators much longer than we've agonized over high resolution audio. Probably all of your favorite recordings have them somewhere in the production chain.


  • Members

Well, like I said, I'm already using an active control, so I have no problem to be solved. As to whether frequency response linearity can be affected by a passive volume control network inserted at line level, I really had no question before your first post on the issue, based on my reading as well as at least a little hands-on experience. (That said, obviously you have to have some gain-staging flexibility to properly A/B at the same listening level, and it can be risky generalizing from limited experience with possibly far-from-exemplary devices.) But, like I said, by rule I defer to people I know have greater knowledge and expertise in the field of discussion. ;)


  • Members
Now, that won't help the crappy analog electronics and speakers in these things, of course, but it would presumably at least elevate currently 16 bit consumer digital electronics above the problem of [digital] volume control crap-out, where turning the level down from the digital side means the signal falls into the digital noise floor more readily.
Thanks for posting the only valid criticism I've seen of 16-bit systems. (I'm only discussing systems where the audio is audio-in, audio-out, not heavy FX and mixing applications.) To my mind, 16/44 sounds as good as anything, and nobody has made a strong case showing that it's not true. But this is definitely an issue in any system where the master volume control is digital rather than analog! Even my mud-standard ears hear this quite clearly, and it reminds me of when I was doing 12-bit sampling and how the tails sounded. Or of my trusty old Ensoniq MR76 keyboard, which I really like but which has a digital master volume, sadly.

 

Of course, most modern low-cost gear is this way. (I did have some laptops where I could tell that the volume control was analog, oddly, but I got out of the habit of using them at anything below nearly full tilt long ago.) So, it's a relevant point.

 

I'm confident that Mike is correct about passive level controllers. After all (unless things have changed radically), an "active volume control" is just a passive one followed by a gain stage, in virtually all mixers and consumer audio gear. No doubt there are significant exceptions, such as in a mic preamp, and possibly in super hi-end preamps like a Mark Levinson.

 

Is a 64 kbps MP3 stream of a DSD recording still MQ-D?
I'm confident that the "source" would be the weakest link prior to hitting the hi-res digital form, so sorry, no soap.

  • CMS Author

There are many audio engineers who find 2x sample rates better sounding than 44.1 kHz, and, at least while a project is in process (before delivery), everyone prefers 24-bit over 16-bit. But with the exception of the golden-eared wealthy audiophiles (there are some of each, but very few of both), a well-recorded 16/44 CD sounds just fine.

 

The thing is that by the time the project gets through mastering, the audio has been so compromised - to sound loud, to be heard over home, office, or automotive ambient noise, or just to sound "contemporary" - that there's a lot to criticize when discussing the perceived "sound quality."

 

I think that the industry, from a marketing standpoint, needs a fresh start; a new product to sell - and that's "High resolution audio." Maybe this go-around they'll do it right. Or better yet, take advantage of the other stuff that can travel along with the digital audio stream that will allow the listener to intelligently control what happens (or doesn't) to the audio at the listening end. For example, a car radio or a portable music player can include processing similar to contemporary mastering, but with a button to turn it off when you're not driving on the freeway or in city traffic, or you have your $400 headphones plugged into your iPod. A new branding name gives the manufacturers a way to charge more for a product that isn't built in the cheapest way possible.

 

High Definition TV is barely here and now there's all the hoopla about 4K. But there's very little 4K media because there are very few 4K TV sets, because there's very little 4K media.

 

As far as volume controls go, there's more than one way to make a digitally controlled one. There are digitally controlled analog attenuators which can go on the output of an analog output stage that has enough level and headroom to fully drive whatever it's intended to drive.

 

And remember, everyone who has ever used a DAW mixer has used a digital volume control - lots of them, actually. The difference is that the arithmetic is done with 32-bit or longer words so that you can attenuate by a pretty large amount digitally and still have all the bits you want at the final output. But all of this costs more than an attenuator with digital input and output.
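A toy model of that difference (not any DAW's actual engine): cut the level 60 dB inside a given word length, make the gain back up, and see what survived.

```python
import numpy as np

t = np.arange(48000) / 48000.0
sig = 0.5 * np.sin(2 * np.pi * 440 * t)

def cut_and_restore_16bit(x: np.ndarray, cut_db: float) -> np.ndarray:
    """Fader arithmetic done in 16-bit words: attenuate, store, then make up gain."""
    g = 10 ** (-cut_db / 20)
    y = np.round(x * g * 32767).astype(np.int16)   # quiet signal spans only a few LSBs
    return y.astype(np.float64) / (g * 32767)

def cut_and_restore_float32(x: np.ndarray, cut_db: float) -> np.ndarray:
    """The same cut done in 32-bit float, the way a long-word mix bus does it."""
    g = np.float32(10 ** (-cut_db / 20))
    y = x.astype(np.float32) * g
    return y.astype(np.float64) / float(g)

for name, fn in (("16-bit fixed", cut_and_restore_16bit),
                 ("32-bit float", cut_and_restore_float32)):
    err = fn(sig, 60) - sig
    print(f"{name}, 60 dB cut + makeup: peak error {20 * np.log10(np.abs(err).max()):.0f} dBFS")
```

On this signal the 16-bit fader leaves an error floor around -36 dBFS, while the float fader's error sits down near the float32 rounding floor, around -150 dBFS.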


  • Members

I'm all for, say, upping 'standard' container formats to 20 bit. I'm a classical fan -- while it's unlikely I'd want to recreate the dynamic range of the concert hall (I'm thinking of a cello concerto I saw some months back where it was just the cello and the bass drum -- I'm pretty sure that would have exceeded the ~90 dB of CD between the audible detail of the solo cello and the cannon-like 'shock' waves of the bass drum), I think there's merit in having a format that really can hold 'any' reasonable musical performance without compromise -- but I have to say that I'd have to see some solid evidence of people actually being able to tell properly transcribed 44.1 from a modern, professional ADC from 48 or 96. I know folks often say they do, but I've yet to see any supporting evidence from proper double-blind testing.

 

Now, don't get me wrong, I ALWAYS thought they should have gone with 48 kHz. But that horse is already out the gate -- there's a huge body of existing recordings at 44.1 kHz, including most folks' existing digital collections. And switching SR at the converter is a pain -- but allowing the OS to do SRC-on-the-fly is a for-sure crapfest, seems to me.

 

And, really, the move to 96/24? Some people like it for production because they feel that some of their plugins perform better at double rates -- but that would mainly be the case with plugins that perform nonlinear DSP but don't anti-alias filter afterward (which, oddly, is apparently not unheard of -- but is clearly bad design, seems to me).

 

And we know from Meyer and Moran, of course, that anyone who actually can tell 96 from 44.1 kHz SRs is going to be very, very rare -- if such a person exists at all.


  • Members

 

I think that the industry, from a marketing standpoint, needs a fresh start; a new product to sell - and that's "High resolution audio."

 

I don't believe that will work. In order to sell something, you have to have something in which the market can sense a tangible difference. CDs were a big hit because there were big differences (even if not necessarily actual improvements) in the sound quality over LPs and tapes -- no ticks, pops or hiss -- and that alone allowed people to at least perceive an improved audio quality. Plus the convenience factor was much superior as well. And the little silver discs were cool.

 

HDTV was an easy sell because the visual quality over SDTV was easily apparent to anyone and flat-screen TVs were way cool and way overdue.

 

But who are you going to sell "high resolution audio" to? In what format? The average person is more than happy with the sound of CDs and MP3s. Play them HRA and they'll just shrug their shoulders. And is there some new cool delivery format ala CDs or flatscreen TVs to sell?

 

I haven't seen it yet, so maybe I shouldn't judge, but I suspect 4K TV will fail as well for the same reasons. How many people will see an appreciable difference in the video quality? I suspect it MUST be at least somewhat better, but the HDTV available now is pretty darn good. And are the sets anything new or special?

 

HDTV JUST finally got to the point where it is affordable and available to most people (although very little is broadcast in even 1080. Most is still in 720.) Do they really think they are going to get everyone to go out and buy new, expensive sets so soon AGAIN???

 

I still remember very well the first time I played "Dark Side Of The Moon" on CD and it was DEAD QUIET in the quiet spots all the way to the fading heartbeat at the very end. And I still remember very well the first time I walked into a store with an HDTV playing "Planet Earth" and just marveling at how clear and detailed the picture was.

 

If High Resolution Audio can do something similar for the average Joe, then it's something that can be marketed. Otherwise, it's doomed to fail beyond a niche audience.

 


  • CMS Author

In order to sell something, you have to have something in which the market can sense a tangible difference. CDs were a big hit because there were big differences (even if not necessarily actual improvements) in the sound quality over LPs and tapes -- no ticks, pops or hiss -- and that alone allowed people to at least perceive an improved audio quality. Plus the convenience factor was much superior as well. And the little silver discs were cool.

 

The thing is that people aren't going to be comparing "High definition audio" with CDs, they're going to be comparing it with their MP3 downloads. Remember . . . nobody plays CDs any more. Besides, they don't need to actually hear a tangible difference in order to believe that there is one. Home listeners aren't going to perform controlled ABX tests to determine if they should buy a new player, they're going to buy the new player because:

 

(a) For a while it'll be the latest thing and they've gotta have it

or

(b) That's all they'll be able to buy after a short while

 

When was the last time you saw a CD player at your local Best Buy? They're all DVD players now - and that's something with which, right now, you could play a "high resolution CD" if there were any made. The weakness there is going to be in the linearity and headroom of the analog output section.

 

HDTV was an easy sell because the visual quality over SDTV was easily apparent to anyone and flat-screen TVs were way cool and way overdue.

 

Most importantly, there was HDTV broadcast from the get-go. You might have had to upgrade to digital cable service in order to get it from that spigot, but there's been over-the-air HDTV about as long as HDTV sets have been on the market at a sensible price. Like the transition from CD to DVD players, it's probably hard to find a TV set that doesn't do some form of HD.

But who are you going to sell "high resolution audio" to? In what format? The average person is more than happy with the sound of CDs and MP3s. Play them HRA and they'll just shrug their shoulders. And is there some new cool delivery format ala CDs or flatscreen TVs to sell?

 

You'll have to ask Neil Young that one. He seems to think it's really important to the listeners. I don't know, but I think that it's going to be an (a) or (b) thing as above. If this is all the industry is going to build, then if you want to play music, you'll have to get an HD player or settle for those crummy old MP3 downloads. Maybe not next year, but likely in five years.

 


  • Members

Before I go off on a tangent here, let me just throw something out there. Back when I bought my PCM-F1 while touring Japan in the early 80s, the folks at Sony were kind enough to add in this thick little paperback on the basics of digital audio. If my memory serves me correctly, (not a sure bet, I'm afraid) didn't the 16-bit/44.1 kHz "standard" come about because one of the big muckety-mucks at Sony wanted to fit at least an hour onto a single CD? I seem to remember something about a particular classical piece that was almost an hour long, but like I said, my synapses don't fire like they used to. To be more clear, am I remembering correctly that the decision for the first "standard" was not purely made from a sonic perspective? I'm not bashing the concept of engineering compromises... the march of technology is filled with such examples. (like how they added the color information to black and white televisions) To another's point on this thread about how good 16/44 sounds to everyone but the golden-eared few, who is the "audience" for this new standard? After twenty plus years in high SPL fields, it certainly isn't me... I feel fortunate just to hear my phone ring. ;)

 

In reference to Mike's original post with regard to the SPARS code, I tried to post this in response to Dr. Craig's wonderful dissertation on Digital Audio Basics in the previous Music Gear Weekly. (apparently Comments were disallowed) This thread may not be the best place to post it, but at least it's somewhat in the ballpark.

 

In the early days of purchasing CDs, we paid a lot of attention to the old SPARS code which - on the early releases - was almost always AAD. The first direct-to-digital CD I purchased was Tom Jung's DMP label 1983 release of Flim & the BBs "Tricycle"... I used it to help tune PA systems as the sonics & dynamics were amazing for the time. However, my first true DDD CD was Dire Straits' "Brothers In Arms". And actually, I never thought I'd see vinyl records again… for quite a few years, that was the case.

 

Now I'm seeing an increasing number of new releases coming out with a vinyl option. I guess I can understand the pure ideological argument for a return to AAA... back in the late 70s, I operated the venerable 24 track Studer A80 which was a wonderful machine. However, I'm not sure I understand the reasoning behind what could effectively be called a DDA… unless the new vinyl releases are simply an attempt to reach that demographic of analogists who refuse to listen to anything NOT on vinyl. But then, wouldn't those individuals be just as opposed to the inclusion of ANY digital conversion in the chain? Oh... and will these new "standards" be acceptable to the analogists?

 

Sorry… my mind wanders. ;) In any case... great article, Dr. Anderton!


  • Members

 

The thing is that people aren't going to be comparing "High definition audio" with CDs, they're going to be comparing it with their MP3 downloads. Remember . . . nobody plays CDs any more. Besides, they don't need to actually hear a tangible difference in order to believe that there is one. Home listeners aren't going to perform controlled ABX tests to determine if they should buy a new player, they're going to buy the new player because:

 

(a) For a while it'll be the latest thing and they've gotta have it

or

(b) That's all they'll be able to buy after a short while

 

When was the last time you saw a CD player at your local Best Buy? They're all DVD players now - and that's something with which, right now, you could play a "high resolution CD" if there were any made. The weakness there is going to be in the linearity and headroom of the analog output section.

 

I think you over(under?)estimate the general public's ability to be fooled into believing something is "better" when they can't actually perceive it. The trick to selling snake oil is that people have to desire the result so much that they don't care if it actually works for them or not. Weight-loss miracle cures sell because so many people desperately want a quick fix to losing weight. There is no clamoring market for better-quality audio. Real or imagined. I don't know anybody beyond a few audiophile types who complain that the sound they get from their MP3 player sucks, do you? Or marveling at how much better their neighbor's audio system sounds.

 

 

 

Most importantly, there was HDTV broadcast from the get-go. You might have had to upgrade to digital cable service in order to get it from that spigot, but there's been over-the-air HDTV about as long as HDTV sets have been on the market at a sensible price. Like the transition from CD to DVD players, it's probably hard to find a TV set that doesn't do some form of HD.

 

Product availability isn't the issue. That would exist if there was any sort of real demand for the product. 3D TV is failing not due to a lack of product, but because nobody really wants it. And that's something that IS tangibly different from HDTV. The history of audio and video is littered with failed "build it and they will come" ideas. What works isn't simply having a better mousetrap---regardless of how much cheese is available to fill it. What works is people having a real desire for better mousetraps.

 

 

 

You'll have to ask Neil Young that one. He seems to think it's really important to the listeners. I don't know, but I think that it's going to be an (a) or (b) thing as above. If this is all the industry is going to build, then if you want to play music, you'll have to get an HD player or settle for those crummy old MP3 downloads. Maybe not next year, but likely in five years.

 

If people can't hear a real difference, why are they going to think the MP3 downloads are crummy? Again, this isn't CDs vs. Close-n-Play. Or HDTV vs SDTV. Without a marked upgrade in quality that virtually everyone can notice and a sizeable market can get excited about, I just don't see it happening.

 

But just my take.


  • CMS Author
If my memory serves me correctly, (not a sure bet, I'm afraid) didn't the 16-bit/44.1 kHz "standard" come about because one of the big muckety-mucks at Sony wanted to fit at least an hour onto a single CD? I seem to remember something about a particular classical piece that was almost an hour long, but like I said, my synapses don't fire like they used to. To be more clear, am I remembering correctly that the decision for the first "standard" was not purely made from a sonic perspective?

 

Your memory is pretty good, but the facts aren't quite correct. The story that was going around was that Mrs. Leopold Stokowski (if I remember the good story correctly) expressed an interest in the new upcoming format being able to play Leo's complete recording of Beethoven's 6th Symphony uninterrupted. The real story was that the 16-bit A/D converters of the day could, on a good day, do about 13 bits accurately. The rest was noise so there wasn't any point in trying to go any higher. EIAJ came up with a 14-bit format that got all the useful data out of the converter and left two bits for error checking and correction. You had the option of turning off the error correction and using the last two bits for audio (there was a switch on some units).

 

The 44.1 kHz sample rate came from the fact that the early commercial PCM digital recorders used video recorders (both professional U-Matic and home Beta VCRs) as the storage medium, and the 44.1 kHz sample rate worked out so that you could use most or all of a video frame to record an integral number of samples - it made it easier to decode what was coming back from the VCR. And as far as the nominal (original) 54 minutes (I think), that was enough for Beethoven and it was about all they were able to write on a disk with the laser technology of the day. So the engineers were happy, Mrs. Stokowski was happy, the customers were happy because they could get more music on a CD than on a phonograph record, and music retailers were happy because the new longer CDs sold for more than LPs and they made a few more bucks on each sale. The music publishers were happy because more songs per unit purchase meant more royalties.
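For the curious, the arithmetic usually quoted for why exactly 44.1 kHz falls out of video recording (the lines-per-field counts are the commonly cited figures for the U-Matic-era PCM adaptors; worth checking against the original spec sheets before quoting further):

```python
# 3 samples stored per usable video line, times usable lines per field, times field rate:
ntsc = 3 * 245 * 60   # NTSC: 60 fields/s, 245 usable lines per field
pal  = 3 * 294 * 50   # PAL: 50 fields/s, 294 usable lines per field
print(ntsc, pal)      # -> 44100 44100: the same rate fits both video systems
```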

 

The initial digital recordings, along with the benefits, had some sonic quirks that bothered a few people, but in general the format was accepted because, to the consumer, it was all advantages and no disadvantages. Mostly what people were impressed with was not higher (and in some respects, lower) fidelity, but longer playing time, not having to get off the couch and turn over the record after 20 minutes, no issues with wear, and it took up less shelf space. About the only complaint from anyone but audio purists (and wannabes) was that the jacket was too small and often liner notes were sacrificed in the manufacturing process.

 

When studios started recording with (closer to) 24-bit resolution, they noticed an improvement in what they heard in the studio, and that's where the "I wish the record buying public could all hear it this way" thing started. Of course the contemporary mastering process made that kind of a moot point, but that's a different story.

 

In the early days of purchasing CDs, we paid a lot of attention to the old SPARS code which - on the early releases - was almost always AAD. The first direct-to-digital CD I purchased was Tom Jung's DMP label 1983 release of Flim & the BBs "Tricycle"... I used it to help tune PA systems as the sonics & dynamics were amazing for the time. However, my first true DDD CD was Dire Straits' "Brothers In Arms". And actually, I never thought I'd see vinyl records again… for quite a few years, that was the case.

 

I said I'd buy a CD player when someone gave me a CD. A friend who had a studio in the Bay Area came to visit (I think it was pre-AES show in NY) and brought me a copy of the first CD that he made, so we went out together to the local appliance store and bought me a Phillips CD player. I think it cost about $150. I still have it. It still works. It even has a S/PDIF digital output. The first all digital CD I bought was a Joe Cocker live concert. I'm not much of a Joe Cocker fan but I was curious to hear what a recording that (supposedly) never touched tape sounded like. I was still buying LPs because I was listening to music that hadn't caught on to the CD craze yet. In fact, back in the 1990s, I recorded a local singer who came from Sierra Leone (appropriate for today since his 16-minute song was about a soccer team back home) who wasn't interested in issuing it on CD or vinyl. He wanted cassettes because that's all the people back home had at the time.

 

Now I'm seeing an increasing number of new releases coming out with a vinyl option. I guess I can understand the pure ideological argument for a return to AAA.

 

It's a fad, and fads do have a tendency to catch on. But I don't think anything is, or ever will be true AAA. Everything goes through a digital system for editing, and maybe, just maybe, a little bit more before it goes to the cutting lathe.

 

.. back in the late 70s, I operated the venerable 24 track Studer A80 which was a wonderful machine. However, I'm not sure I understand the reasoning behind what could effectively be called a DDA… unless the new vinyl releases are simply an attempt to reach that demographic of analogists who refuse to listen to anything NOT on vinyl.

 

DDA meant recorded digitally, mixed digitally, but mixed down to analog tape. A lot of studios were sending the mixdown to both a digital and an analog recorder, and cutting the CD master from the one that they liked better. Sometimes the analog one was used, sometimes the digital one.

 


  • CMS Author

 

I think you over(under?)estimate the general public's ability to be fooled into believing something is "better" when they can't actually perceive it. The trick to selling snake oil is that people have to desire the result so much that they don't care if it actually works for them or not. . . . There is no clamoring market for better-quality audio. Real or imagined. I don't know anybody beyond a few audiophile types who complain that the sound they get from their MP3 player sucks, do you? Or marveling at how much better their neighbor's audio system sounds.

 

In general, I agree, but there are some people with a lot of money and a lot of influence in the music business who are trying really hard to create that demand. And I think they're doing a pretty good job of it. Nowadays anyone content with making a computer an integral part of their music listening system (even if it's an iPhone and a pair of $5 ear buds) can play a high bit rate AAC or FLAC file. And there are music delivery services that are now offering 256k or 320k MP3s, usually for a premium price. Some may actually be listening on a playback system that, with a high resolution source, sounds better than the MP3s that they've been downloading. But those who don't have the capability hear what both the audiophiles and the music producers are saying. And when it becomes cheap and practical enough, they'll buy in. They'll definitely buy in when CDs go away and all downloads are of better resolution than we have today.

 

Remember that we got low bit rate MP3s because the earliest portable players had only enough memory for a couple of songs without applying heavy data destruction. And when downloading came along, many people were still on 2400 baud modems. Back in the modem days (but yet DAT days), a local newscaster asked me what he would need in order to produce features at home or in the field and send them back to the studio digitally so it didn't suffer the quality loss of going via an analog dial-up phone call. At the time I worked out that it would take several hours to transmit a five minute story. He stuck with analog.

 

Today, with only a small portion of the population still using low speed Internet, it's not unreasonable to use the Internet to deliver at least "CD quality" files. So why don't they stop there? Because they can go further, and there's an opportunity to sell more new equipment.

Product availability isn't the issue. That would exist if there was any sort of real demand for the product.

 

So did quad (4-channel) recordings. Nobody wanted to put all of those speakers in their living room. But today 5.1 (or more) surround sound is enjoying a healthy position. But it's largely connected with video (home theaters) and games, not pure music. Still, the capability is there, and you can play a surround DVD and enjoy the music even if you don't have a fancy TV. But I'll give you that it adds something more than audio fidelity. It must, given how many people listen to surround sound on a "home theater in a box" system that costs $195 for six speakers and a receiver.

 

3D TV is failing not due to a lack of product, but because nobody really wants it. And that's something that IS tangibly different from HDTV.

 

3D movies never went over very big either, but most major movie theaters have surround sound. 3D TV, like 3D movies, is too much trouble to watch, and they went through the same sort of market confusion as Beta/VHS VCRs, with one TV receiver's 3D glasses not working with another receiver. I've seen "glasses-less" 3D TV and it's not very impressive.

 

If people can't hear a real difference, why are they going to think the MP3 downloads are crummy?

 

Because, before they make a choice based on their own listening experience, they will have already been sold that bill of goods. How many people go to a shop that has demo facilities (how many of them are there any more anyway?) to buy their living room stereo system? They go to Best Buy and buy what's on sale. I'll bet that if you went into one of those high end audio shops with a couple of CDs and a couple of today's MP3 files, you'd be able to hear the difference in quality when the shop played you a DSD recording coming off a computer or hardware player. But people who go into shops like that don't need to be convinced. And people who shop at Best Buy are already convinced.

 

It's not going to change overnight. CDs had a good 25 year run, the stereo LP maybe twice that. Even cassettes lasted 15 years or more in the marketplace. I'll bet there will be plenty of high resolution music files in distribution 10 years from now, and whether people can hear the difference between then and now (or hear and don't care), that's what's going to be the delivery format.


  • Members
Before I go off on a tangent here, let me just throw something out there. Back when I bought my PCM-F1 while touring Japan in the early 80s, the folks at Sony were kind enough to add in this thick little paperback on the basics of digital audio. If my memory serves me correctly, (not a sure bet, I'm afraid) didn't the 16-bit/44.1 kHz "standard" come about because one of the big muckety-mucks at Sony wanted to fit at least an hour onto a single CD? I seem to remember something about a particular classical piece that was almost an hour long, but like I said, my synapses don't fire like they used to. To be more clear, am I remembering correctly that the decision for the first "standard" was not purely made from a sonic perspective? I'm not bashing the concept of engineering compromises... the march of technology is filled with such examples. (like how they added the color information to black and white televisions) To another's point on this thread about how good 16/44 sounds to everyone but the golden-eared few, who is the "audience" for this new standard? After twenty plus years in high SPL fields, it certainly isn't me... I feel fortunate just to hear my phone ring. ;)

 

In reference to Mike's original post with regard to the SPARS code, I tried to post this in response to Dr. Craig's wonderful dissertation on Digital Audio Basics in the previous Music Gear Weekly. (apparently Comments were disallowed) This thread may not be the best place to post it, but at least it's somewhat in the ballpark.

 

In the early days of purchasing CDs, we paid a lot of attention to the old SPARS code which - on the early releases - was almost always AAD. The first direct-to-digital CD I purchased was Tom Jung's DMP label 1983 release of Flim & the BBs "Tricycle"... I used it to help tune PA systems as the sonics & dynamics were amazing for the time. However, my first true DDD CD was Dire Straits' "Brothers In Arms". And actually, I never thought I'd see vinyl records again… for quite a few years, that was the case.

 

Now I'm seeing an increasing number of new releases coming out with a vinyl option. I guess I can understand the pure ideological argument for a return to AAA... back in the late 70s, I operated the venerable 24 track Studer A80 which was a wonderful machine. However, I'm not sure I understand the reasoning behind what could effectively be called a DDA… unless the new vinyl releases are simply an attempt to reach that demographic of analogists who refuse to listen to anything NOT on vinyl. But then, wouldn't those individuals be just as opposed to the inclusion of ANY digital conversion in the chain? Oh... and will these new "standards" be acceptable to the analogists?

 

Sorry… my mind wanders. ;) In any case... great article, Dr. Anderton!

Beethoven's 9th Symphony was the work that was seen as a benchmark length. While there are longer important works, of course, it was their goal to finally be able to listen to all of the Ninth without interruption.

 

Who sez the tech guys don't sometimes put music first? ;)

 

I think it's pretty clear that the vinyl fad (for most) has zero to do directly with audio quality. Rather, it's probably fed by a broad matrix of impulses: anti-commercialism (second-hand vinyl shoppers), distrust of the modern, distrust of technologies that require a basic understanding of advanced math to comprehend (Fourier transform, anyone?) and that are manifestly beyond the grasp of many people working in music production, and a desire to get 'back to basics' -- basic future-shock reactionism.

 

It's like when all us hippies decided to go live off the land.

 

A lot like that, I should say. :D


  • Members

"EIAJ came up with a 14-bit format that got all the useful data out of the converter and left two bits for error checking and correction. You had the option of turning off the error correction and using the last two bits for audio (there was a switch on some units)."

 

I had to look to be sure, but thought I remembered this... my PCM-F1 has the 14-bit/16-bit switch; it's labeled RES. (resolution) I used a Canon VR-40A VHS deck with it because of the available stereo HiFi tracks. I would run the FOH main console outputs into the F1 and X-Y audience mics into the HiFi tracks to effectively get 4 relatively high quality tracks for live recordings. The only downside was that the HiFi tracks had an inline AGC (no adjustable record levels), so I had zero control over the audience mics' recording level... still, with some judicious fader-riding on subsequent mixdowns, we produced some very acceptable and realistic live recordings.

 

"Everything goes through a digital system for editing, and maybe, just maybe, a little bit more before it goes to the cutting lathe."

 

True, except in the case of (speaking of) the recent Neil Young/Jack White collaboration. Although - as you say - such may just be a fad.

 

Thanks for the corrections and educations on the other stuff!

