Will this matter?


1001gear


  • Members

I started out as a hi-fi'er and at least learned what to buy. I stopped when my classical vinyl sounded "good enough," which I suppose puts me pretty much on the low end of the audiophile scale. I was never one to marvel at the exquisite detail of crappy tunes recorded and mastered properly, and I've evolved into listening for content - mostly YouTube and CD audio on my 1.5" computer speakers. I use what used to be my main hi-fi to practice drums and guitar, often only listening for the changes and/or rhythm. So, self-promoting preface out of the way, there's this:

 

http://www.digitaltrends.com/home-theater/mqa-best-high-resolution-file-format-htc/

 

Do musicians need this? Is there a discernible difference? Will hearing-impaired professionals be able to tell? Does it in fact sound better?


All good questions - but finding the answers is going to require actually listening to it. :)


Well, you work with hi-res masters. What is all this non-phase-shifted nuance of which they speak? Is home audio twangy by comparison? :D

 

I assume by "home audio" you're referring to 16-bit CDs and standard MP3 files?

 

I've never really noticed phase issues with CDs, but I've heard plenty of low-res MP3s that sounded as swirly and swishy as all get-out.


I've seen this referenced in other threads and the general vibe seemed to be skepticism, to say the least. We'll see...

 

Did you read the article? I think that skepticism is warranted. I'd certainly want to see the non-layman's description of what they're doing (the layman's version in the article is unconvincing), and have a chance to hear it, before my own skepticism would be quelled.


  • CMS Author

"The tech behind the process is complex, even in laymen’s terms, but the official pitch claims the MQA process first cleans up the distortion and “blurring” that occurs when an analog track is converted into the digital realm, creating a “truly accurate 3D soundstage.” It then uses a compression process MQA playfully calls “musical origami” to “encapsulate” all of the extremely high frequency info — the stuff our ears don’t so much hear on a conscious level, but our bodies feel in a live performance, which can later be decoded by MQA-enabled devices."

 

This sounds like horse manure to me. If you have a bad A/D converter, then by all means get a better one; that's not the problem. The problem is in the playback, because that's the one thing the folks who made the recording that sounds fantastic in the control room can't control.

"All that being said, MQA is the most exciting audio file format I’ve heard since the FLAC file was born. All audio lovers can hope for is that this information will get out and more people will get on board, because MQA truly is a revolutionary format; one that, I hope, will be available everywhere sooner than later."

 

There's nothing exciting about FLAC other than that it's as good as the WAV you started out with, played back over whatever playback system you have, only it takes up less storage space. It doesn't make the original recording sound better than it would without FLAC. It does sound better than a low-bit-rate MP3, and probably a little better than a high-bit-rate (320 kbps) MP3, particularly if you're FLACing a well-recorded 24-bit, 96 kHz program.

 

 

"Updated 3/2/2015: This article was updated to include the info that MQA files are delivered via standard file formats such as WAV or FLAC."

 

Huh? So this suggests that what they're delivering is a WAV or FLAC file that comes from a WAV file that's been processed by MQA? Sounds like another miracle cure for digitalitis to me.

 


  • Members

That was the "miracle" of the format: master quality could now be streamed comfortably at existing bandwidth. I'm more concerned with the quality than the compression, but that's the sell of it. Other articles have dropped all the major names - Tidal, Spotify, etc. - as well as manufacturers like Sony and some others.


  • Members

The "miracle" of any format is the music it's playing back. Period.

 

Any corporate type or venture capitalist who doesn't realize that the streaming service isn't the star - the musicians are - is clueless, stupid, and deserves to lose what they invest. The one thing I'll say about Tidal is that it sells the musicians more than the service.


  • CMS Author

That's not a bad video. It at least sounds like there's been some work done in making a better D/A converter, and that can't be a bad thing. But still, I think that anything better than "CD quality" will be a niche market, and will therefore always be more expensive than what the typical user will pay . . . until it becomes the only option available.


  • Members

I should have stopped reading the Digital Trends article when I got to this passage...

 

I started with Daft Punk’s Doin It Right, and the MP3 version was as you’d expect: flat and boring, with all the right pieces in place, and none of the excitement of live audio. Then I tried the MQA version, and everything suddenly blossomed.

First, what kind of tin-ear uses Daft Punk for a listening test?

 

Then there's the sighted comparison with the 'flat and boring' MP3 -- was that 320 kbps? 192? 128? Maybe 16 kbps at 8 bits? We don't know; he ain't gonna tell us. The next comparison at least uses something with some real instruments in it, some version of Brubeck's "Take Five." Again, he knows what he's listening to, meaning confirmation bias is at play, one way or the other. And out comes more airhead verbiage: "It simply felt and sounded more real, more dimensional."

 

It gets worse. Cut back to the official BS...

the official pitch claims the MQA process first cleans up the distortion and “blurring” that occurs when an analog track is converted into the digital realm, creating a “truly accurate 3D soundstage.”
Silly me, I was thinking that modern multibit oversampling combined with a contemporary approach to reconstruction filtering had already taken distortion down to remarkably tiny levels.

 

And precisely what kind of distortion is this going to supposedly remove from already recorded material? Plangent and some others have had some success with removing time-domain-related IMD from old tape and disc masters but I'm plenty curious just what 'distortion' these guys are eliminating. NOT to mention HOW... Musical Origami is not exactly a tech description that carries much weight, even with this failed poet.

 

But wait... it gets stupider...

It then uses a compression process MQA playfully calls “musical origami” to “encapsulate” all of the extremely high frequency info — the stuff our ears don’t so much hear on a conscious level, but our bodies feel in a live performance, which can later be decoded by MQA-enabled devices.
Our ears don't hear it on a 'conscious level'... don't think about that one too hard. But if you can't HEAR it, your body isn't likely to respond to it. There was a looooooooong discussion at a competing recording site a few weeks back that went through Oohashi and a couple of more recent attempts at exploring the so-called "Hypersonic Effect" and the believers and skeptics pretty much fell into predictable camps: pro-science, empirical testing on one side and new-age-quantum-nonsense-spewing-want-to-believe types on the other. Many flaws and possible flaws in the testing methodologies of Oohashi and the others were explored at sometimes tedious length. Few minds were changed. Of course.

The company says that, even on regular devices, MQA encoded files play at CD-quality, but on MQA-enabled devices “the full recording is unfolded to deliver the full performance.”

Of course it does. It tricks your regular old equipment into delivering better sound than it was previously capable of so, OF COURSE, with gear specially designed to make use of the technology, it delivers even BETTER sound than the better-than-original you'd get when playing it with... Oh, you get the idea.

 

This stuff really annoys me.


  • Members

The Wikipedia article on Master Quality Authenticated offers a view inside the sausage factory -- and it ain't so poetic...

 

The basic premise is, akin to XRCD/HDCD/aptX in some ways, to hierarchically compress the relatively little energy in the higher frequency bands into compressed data streams, which are then embedded into the lower frequency bands using proprietary dithering techniques.

 

After a series of such manipulations, the downsampled 44 kHz/16-bit data (dithered partially with the last-step data stream), the layered data streams, and a final "touchup" stream (the compressed difference between the lossy signal from unpacking all the layers and the original) are provided to the playback device. Given the low amount of energy expected in the higher frequencies, only one extra frequency-band layer (the upper 44 kHz band of 96/24, packed into the dither of 48/16) and one touchup stream (the compressed difference between the original 96/24 and the 48/16) are needed; these are distributed together as a 48/24 stream, of which the bit-decimated 48/16 part can be played by normal 48/16 playback equipment.

 

One more difference from standard formats is the sampling process. The audio stream is sampled by convolution with a triangle function and interpolated later during playback. The theory behind such sampling is explained in these slides: http://icms.org.uk/downloads/BtG/Dragotti.pdf
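
As a rough illustration of that last sentence (a numpy sketch of the general idea as I read it, not MQA's actual algorithm, and the sample rates are just placeholders): convolving with a triangular kernel before decimating amounts to taking linearly weighted averages around each stored sample, and playback then reconstructs by linear interpolation.

# Illustrative sketch only -- my reading of "sampled convolved with a triangle
# function, interpolated later during playback", NOT MQA's actual code.
# The sample rates below are placeholders.
import numpy as np

fs_hi, fs_lo = 384000, 48000            # high-rate capture, lower storage rate
ratio = fs_hi // fs_lo

t = np.arange(fs_hi) / fs_hi
x = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone at the high rate

tri = np.bartlett(2 * ratio + 1)        # triangle (linear B-spline) kernel
tri /= tri.sum()

x_stored = np.convolve(x, tri, mode='same')[::ratio]   # "sampled convolved with a triangle"

t_lo = np.arange(x_stored.size) / fs_lo
x_played = np.interp(t, t_lo, x_stored)                # "interpolated later during playback"

err = 20 * np.log10(np.std(x_played - x) / np.std(x))
print(f"reconstruction error for a 1 kHz tone: {err:.1f} dB")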

 

But it's this passage that really stands out:

Compared to FLAC/ALAC and other lossless formats, there is no actual bandwidth saving, and the 48/16 signal has easily identifiable high-frequency noise in the three LSBs. Based on the information available, the fully decoded MQA signal is 352 kHz at 24 bits. Whether it's lossless or "only keeps timing information to remove ringing and echo" remains to be seen.

 

https://en.wikipedia.org/wiki/Master..._Authenticated
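
For a sense of scale on that three-LSBs point, here's some quick back-of-the-envelope arithmetic of my own (not from the article): if the bottom three bits of a 16-bit word carry packed data rather than signal, the undecoded listener is left with roughly a 13-bit noise floor.

# Rough arithmetic on the "noise in the 3 LSBs" claim -- my own
# back-of-the-envelope numbers, not from the Wikipedia article.
def quantization_snr_db(bits):
    # Classic ideal-quantizer SNR for a full-scale sine: 6.02*N + 1.76 dB
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit ideal SNR: {quantization_snr_db(bits):.1f} dB")

# With the bottom 3 bits of each 16-bit word given over to packed data,
# the effective word length for the undecoded listener is about 13 bits:
print(f"16-bit carrying data in 3 LSBs: ~{quantization_snr_db(13):.1f} dB")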

 

 

This writer performed a number of analyses of various file formats compared to MQA and the results are interesting, to say the least:

 

http://www.computeraudiophile.com/bl...ires-flac-674/

 

On the basis of his analysis, he's vexed by the suggestion that MQA is any sort of real improvement over FLAC or ALAC. And one of the NON-technical issues he uncovers in the marketing material for MQA is even more interesting...

I get less noise by using my tools to produce 44.1/16 [FLAC] file and the size is smaller 6.2 MB vs 16.7 MB. At tiny size increase to 17.2 MB I can have full normal 176.4/18 FLAC! The optimized 120/18 normal FLAC was 13 MB, 3 MB less than the MQA FLAC. And undoubtedly much better quality and no fancy playback hardware needed!

 

So what the heck is the point of MQA!? It doesn't save any bandwidth, it just adds a proprietary per-unit licensing royalty cost to the picture.

 

In addition, for content providers:

 

Every MQA encoder will need access to an HSM (Hyper-Security Module) that issues the encrypted signatures contained within each file. Costs of owning and implementing HSM within your environment will generally range between £5,000 - £20,000 but it’s important you discuss this with your technical team and MQA.

Or alternatively you can pay to 7digital to encode your files.

 

Producers cannot encode MQA files on their own; "The actual MQA encoding takes place at the encoding house". The files they produce with the plugins just add metadata to the file for the encoding process.


  • Members

That's funny stuff. The scheme does make sense, though. The reason you get 44.1 by default is that it's essentially just that, with the 192k/24-bit part packed down under the noise floor. Without decoding, you get plain CD audio. There are a couple more explanations on YouTube; I'll go find the official one.


  • Members

Filter ringing is a (manifestly) much misunderstood phenomenon -- and the pre-ringing issue is one often grabbed onto by those looking for straws to grasp.

 

This forum discussion (linked below) between the 'pro-science,' empirical-evidence crowd and the want-to-believe cohort goes into the ringing behavior of both IIR and FIR filters and a number of other issues (including the range of human hearing and the question of non-hearing perception of sound). The discussion of filter resonance starts getting interesting in the 100-200 post range, but sane folks will want to do some skimming. Those with a decent technical background can probably figure out fairly quickly who knows what they're talking about...

 

Ultrasonic Effect and what it might tell or not tell us about 96k and Vinyl


  • Members

Not that I have the proverbial golden ears - I play drums and electric guitar, lol - but I've never subscribed to the 20 kHz limit. Sure, take a sine wave and it'll disappear from your perception somewhere in the teens, but now take an orchestra and you have your 'up to 20k' times many instruments. That means random harmonic/phase events at many times 20k, and certainly beyond what the sampling rate can capture. This is bound to cause 'pixelation' throughout the recorded audio. The best that can happen from that point on is an electronic glossing-over of any audible distortion, whether by filtering, by ambience, or by whatever else they do.


  • Members

I think you might find these videos really interesting. I'm not normally a fan of video presentation of information (I like reading), but these are very well done, work through explanations in a step by step manner, have smart use of graphics and animation, and stick to solid, provable science.

 

With regard to some 'old wives' tales' about digital audio, they understand that some folks will be highly skeptical, so in the second vid they use high-end analog measuring gear most of the way through (for one case, analog gear simply doesn't offer the range and resolution necessary, but otherwise it's all analog measurement).

 

A Digital Media Primer for Geeks is pretty much what it sounds like, a fast-moving overview -- but it has a nice basic explanation of how sampling audio works and why it can return a continuous analog copy of a bandlimited analog input signal. (About halfway through the 30 min. vid you may want to jump to the second vid, below, since things get pretty geeky and move on to digital video and media containers.)

 

first half is audio-oriented - has explainer on how digital audio works

https://www.youtube.com/watch?v=FG9jemV1T7I

 

 

The Digital Show & Tell goes into audio issues a little farther and does some myth-busting using an analog signal generator and scope...

 

all audio -- explores details, busts some myths

https://www.youtube.com/watch?v=cIQ9IXSUzuM

 

Here's the support page for the Digital Show & Tell video tutorial above, with links to downloadable versions in various resolutions, subtitle files for a number of languages, and a link to the page for the first video presentation, A Digital Media Primer for Geeks.


  • Members

I've seen those and more or less agree. I'm not arguing digital versus analog instruments. The issues here are: does 24-bit sound better, and can you improve on it?

Phil O'Keefe says he can tell the difference. I've never had good enough monitoring to venture a guess, which means, so far, nope. And anyway, studios are all about the higher resolutions. Is it only because they can?


  • Members

With regard strictly to the 24-bit issue -- it's all about signal-to-noise ratio, obviously. 16 bits gets you a potential of something a little over 90 dB SNR. But that's for properly mastered material that makes full use of the dynamic space. If we take a solo track we've recorded at a 'moderate' safe level (a lot of folks like to keep their peaks under -12 dB when tracking so as to be assured that none will reach 0 dB, with many keeping their 'average' level down around -18 dB) and play that track in isolation -- particularly if we focus on reverb tails and/or note fades, and especially if we turn the playback volume up -- at some point with a 16-bit signal we WILL hear that -90 dB noise floor.
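
To put rough numbers on that (just back-of-the-envelope arithmetic, nothing from the vid): the ideal quantization noise floor sits around 6 dB per bit below full scale, so conservative tracking levels eat into a 16-bit budget pretty quickly.

# Quick headroom arithmetic for the point above -- my own rough numbers.
# Ideal quantization noise floor relative to full scale is about -6.02*N dB.
def noise_floor_dbfs(bits):
    return -6.02 * bits

for bits in (16, 24):
    floor = noise_floor_dbfs(bits)
    for level in (-12.0, -18.0):       # typical conservative tracking levels (dBFS)
        print(f"{bits}-bit, program around {level:+.0f} dBFS: "
              f"~{level - floor:.0f} dB of signal above the noise floor")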

 

And that is why almost everyone recommends tracking with 24 bit converters into a DAW format that will accommodate it fully (most use 32 or 64 bit floating point word length formats).

 

But the primary difference is going to be the noise floor. Adding extra bits or reducing them merely lowers or raises the noise floor, as demonstrated in the vid. Of course, as delicate parts of the signal sink below the noise floor (and signal does 'continue' beneath the noise floor), our ability to perceive them accurately is reduced. But this happens at a very low level relative to digital full scale. In a properly mastered mix of conventional music, the only time you would ever be able to hear this phenomenon would be by turning the volume way up during a really, really quiet section, reverb tail, or fade.
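
If you want to see the 'signal continues beneath the noise floor' bit for yourself, here's a small sketch (mine, and it assumes the usual TPDF dither is applied at the 16-bit quantizer): a 1 kHz tone at -100 dBFS survives quantization even though it sits below the ~-96 dB floor.

# Sketch: a tone below the 16-bit noise floor still survives quantization
# when TPDF dither is applied (standard practice, assumed here).
import numpy as np

fs, bits = 44100, 16
lsb = 2.0 / (2 ** bits)                        # step size for a +/-1.0 full-scale signal
t = np.arange(fs) / fs
tone = 10 ** (-100 / 20) * np.sin(2 * np.pi * 1000 * t)   # 1 kHz at -100 dBFS

dither = (np.random.rand(fs) - np.random.rand(fs)) * lsb  # TPDF dither, +/-1 LSB
quantized = np.round((tone + dither) / lsb) * lsb

# Correlate against the original tone: the -100 dBFS component is still there,
# linearly encoded, buried in (not destroyed by) the quantization noise.
recovered = 2 * np.mean(quantized * np.sin(2 * np.pi * 1000 * t))
print(f"recovered tone level: {20 * np.log10(abs(recovered)):.1f} dBFS")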

 

By the way, the dynamic range of the human ear in optimal shape is only around 90 dB -- at any moment -- but because of the muscles and mechanisms of the ear, that range is a sort of 'sliding' range (I'd say it's dynamic except for the obvious linguistic collision ;) ). When we're in a noisy environment, the ear tries to protect itself by reducing its overall sensitivity. (Which is why you can just barely make out your GF/wife as she shouts in your ear at the Judas Priest re-re-reunion.) It's just a little like the floating point range of a modern DAW. Kinda, sorta.

 

 

With regard to frequency content above the limits of human hearing... Can it affect what we hear? Perhaps in combination with other ultrasonic content, by subtractive recombination? In a capture and reproduction system with linear response, such signal components remain intact and discrete -- and so stay above the limits of hearing. But if there are nonlinearities, they can produce intermodulation distortion (IMD), which can create unwanted artifacts in the audible range. In fact, that is one very good reason why grooved-record mastering traditionally filters out this ultrasonic content -- and why even audible (to some) high frequencies were typically rolled off above 16 kHz, to minimize such IMD in the messy electromechanical process.
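
Here's a small toy model of that mechanism (my own example, not from the article or the forum thread): two tones that are entirely ultrasonic, run through a slightly nonlinear stage, produce a difference tone squarely in the audible band.

# Toy IMD demo -- my own example. Two ultrasonic tones passed through a
# slightly nonlinear stage produce an audible-band difference tone.
import numpy as np

fs = 192000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 30000 * t) + 0.5 * np.sin(2 * np.pi * 33000 * t)

paths = {
    "linear": x,                      # ideal system: nothing lands in the audible band
    "nonlinear": x + 0.05 * x ** 2,   # mild 2nd-order nonlinearity (stressed tweeter, amp stage)
}

for name, y in paths.items():
    spectrum = np.abs(np.fft.rfft(y * np.hanning(fs)))
    freqs = np.fft.rfftfreq(fs, 1 / fs)
    band = (freqs > 20) & (freqs < 20000)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    peak_level = 20 * np.log10(spectrum[band].max() / spectrum.max() + 1e-12)
    print(f"{name}: strongest audible-band component ~{peak_freq:.0f} Hz, "
          f"{peak_level:.0f} dB relative to the ultrasonic tones")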


  • Members
What of the 192k sampling rate then?

There can be good reasons for upsampling a lower sample rate to double or quad rate for digital signal processing and then downsampling to a conventional distribution format. Our DSP tools work better with more samples.

 

And, in fact, modern multibit oversampling converters (which many if not most of us professionals have) use a variation on this approach, upsampling the signal and interpolating intermediate sample values in order to raise the Nyquist frequency and so ease the job of the output filter.
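
As a concrete example of the 'work at a higher rate, then come back down' idea (a sketch only, assuming numpy/scipy are available; not anyone's product code):

# Sketch of upsample -> process -> downsample, assuming numpy/scipy.
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)        # plain 1 kHz tone

x4 = resample_poly(x, 4, 1)                   # up to "quad rate" (176.4 kHz)

# Nonlinear processing (here a soft clipper) creates new harmonics; at
# 176.4 kHz they have room below Nyquist instead of folding back as aliases.
y4 = np.tanh(3.0 * x4)

y = resample_poly(y4, 1, 4)                   # back down to 44.1 kHz for delivery
print(len(x), len(x4), len(y))                # 44100 176400 44100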

 

But does recording or distributing at a so-called 'quad rate' offer any benefits? HD enthusiasts and audiophiles will argue yes. However, if one talks to some of the designers of our best equipment like Dan Lavry or many others, one will run into high levels of skepticism on this front.

 

Many folks were unhappy with the original Redbook spec of a 44.1 kHz sample rate because it left a very small 'space' for the anti-alias filter roll-off. It's not that the scientifically oriented among that cohort were arguing that a 20 kHz bandwidth wasn't enough -- rather that a higher sampling rate would have allowed better and/or cheaper filtering (at, of course, the 'cost' of reduced capacity -- a very big deal with the optical disc tech of 35 years ago!).

 

But that is precisely why contemporary multibit oversampling converters have become so dominant -- they offer a way to greatly improve antialias filtering even with a 'basic' 44.1 kHz storage format.
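
To put rough numbers on that squeeze (my own back-of-the-envelope comparison using scipy's Kaiser-window design rule, with a ~96 dB stopband as a stand-in for '16-bit clean'): the same filtering job gets several times cheaper when the transition band is allowed to stretch out.

# Rough FIR-length estimates for the anti-alias transition band -- my own
# back-of-the-envelope comparison using scipy's Kaiser design rule.
from scipy.signal import kaiserord

atten_db = 96.0                                    # ~16-bit stopband target

cases = (
    ("Redbook squeeze", 44100.0, 20000.0, 22050.0),        # keep 20 kHz, stop by 22.05 kHz
    ("96 kHz breathing room", 96000.0, 20000.0, 48000.0),  # keep 20 kHz, stop by 48 kHz
)

for label, fs, passband, stopband in cases:
    width = (stopband - passband) / (fs / 2.0)     # transition width, normalized to Nyquist
    ntaps, beta = kaiserord(atten_db, width)
    print(f"{label}: transition {passband/1000:.1f}-{stopband/1000:.1f} kHz "
          f"at fs={fs/1000:.1f} kHz -> roughly {ntaps} taps")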


  • Members

This is all very interesting. Digital audio is still new, and we have much yet to learn.

 

For example, I was asked to be the voice of reason at a New Music Seminar panel discussion on high-res audio two years ago. I was recording some examples to prove that no one could tell the difference between 44.1 kHz and 96 kHz, but there was no doubt the 96 kHz ones sounded better. Say what?!?

 

Long story short: Some plug-ins and virtual instruments don't oversample, and waveforms with extreme harmonic content create foldover distortion. So, recording at the higher sample rate eliminated the foldover distortion (this was the genesis of SONAR adding upsampling to the rendering process). However, this had nothing to do with extending the frequency response as heard by humans. It simply eliminated an issue inherent in digital audio that may, or may not, come into play with certain plug-ins.
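
The foldover effect itself is easy to demonstrate in a few lines (a toy sketch of my own, not the SONAR rendering code or the actual seminar examples): a naive, non-bandlimited sawtooth, which is roughly what a cheap virtual oscillator produces, dumps noticeably more aliased junk into the audible band at 44.1 kHz than at 96 kHz.

# Toy foldover (aliasing) demo -- my own example, not the SONAR code.
# A naive, non-bandlimited sawtooth (like a cheap virtual oscillator)
# puts more aliased junk into the audible band at 44.1 kHz than at 96 kHz.
import numpy as np

f0 = 1234.0                                    # oscillator pitch, 1 second of audio

for fs in (44100, 96000):
    t = np.arange(fs) / fs
    saw = 2.0 * ((f0 * t) % 1.0) - 1.0         # naive sawtooth, harmonics to infinity

    spectrum = np.abs(np.fft.rfft(saw * np.hanning(len(saw)))) ** 2
    freqs = np.fft.rfftfreq(len(saw), 1 / fs)

    audible = freqs < 20000.0
    near_harmonic = np.abs(freqs / f0 - np.round(freqs / f0)) * f0 < 20.0
    aliased = np.sum(spectrum[audible & ~near_harmonic])   # in-band, non-harmonic energy
    total = np.sum(spectrum[audible])
    print(f"fs={fs:6d} Hz: aliased (non-harmonic) energy in the audible band is "
          f"{10 * np.log10(aliased / total):.1f} dB relative to the total audible energy")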

 

