by Craig Anderton
Maybe it’s just the contentious nature of the human race, but as soon as digital audio appeared, the battle lines were drawn between proponents of analog and those who embraced digital. A lot of claims about the pros and cons of both technologies have been thrown back and forth; let’s look at what’s true and what isn’t.
A device that uses 16-bit linear encoding with a 44.1 kHz sampling rate gives “CD quality” sound.
All 16-bit/44.1 kHz systems do not exhibit the same audio quality. The problem is not with the digital audio per se, but interfacing to the analog world. The main variables are the A/D converter and output smoothing filter, and to a lesser extent, the D/A converter. Simply replacing a device’s internal A/D converter with an audiophile-quality outboard model that feeds an available AES/EBU or S/PDIF input can produce a noticeable (and sometimes dramatic) change.
What’s more, one of digital audio’s dirty little secrets is that when the CD was introduced, some less expensive players used 12-bit D/A converters—so even though the CD provided 16 bits of resolution, it never made it past the output. I can’t help but think that some of the early negative reaction to the CD’s fidelity was about limitations in the playback systems rather than an inherent problem with CDs.
16 bits gives 96 dB of dynamic range, and 24 bits gives 144 dB of dynamic range.
There are two things wrong with this statement. First, it’s not really true that each bit gives 6 dB of dynamic range; for reasons way too complex to go into here, the actual number is (6.02 × N) + 1.76, where “N” is the number of bits. Based on this equation, an ideal 16-bit system has a dynamic range of 98.08 dB. As a rule of thumb, though, 6 dB per bit is a close enough approximation for real-world applications.
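The formula is easy to check for yourself; here’s a quick sketch (function name is mine) comparing the exact figure with the 6-dB-per-bit rule of thumb:

```python
def ideal_dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of an ideal N-bit converter with a full-scale sine wave."""
    return 6.02 * bits + 1.76

# An ideal 16-bit system: ~98.08 dB (not 96 dB); 24-bit: ~146.24 dB
for n in (14, 16, 24):
    print(f"{n}-bit: {ideal_dynamic_range_db(n):.2f} dB")
```

Note that the 14-bit figure (about 86 dB) lines up with the real-world 16-bit performance mentioned below.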
Going from theory to practice, though, many factors prevent a 16-bit system from reaching its full potential. Noise, calibration errors within the A/D converter, improper grounding techniques, and other factors can raise the noise floor and lower the available dynamic range. Many real-world 16-bit devices offer (at best) the performance of an ideal 14-bit device, and if you find a 24-bit converter that really delivers 24 bits of resolution...I want to buy one!
Also note that for digital devices, dynamic range is not the same as signal-to-noise ratio. The AES has a recommended test procedure for testing noise performance of a digital converter; real-world devices spec out in the 87 to 92 dB range, not the 96 dB that’s usually assumed. (By the way, purists should note that all the above refers to undithered converters.)
Digital has better dynamic range than analog.
With quality components and engineering, analog circuits can give a dynamic range in excess of 120 dB — roughly equivalent to theoretically perfect 20-bit operation. Recording and playing back audio with that kind of dynamic range is problematic for either digital or analog technology, but when 16-bit linear digital recording was introduced and claimed to provide “perfect sound forever,” the reality was that quality analog tape running Dolby SR had better specs.
With digital data compression like MP3 encoding, even though the sound quality is degraded, you can re-save it at a higher bit rate to improve quality.
Data compression programs for computers (as applied to graphics, text, samples, etc.) use a lossless encoding/decoding process that restores a file to its original state upon decompression. However, the data compression used with MP3, Windows Media, AAC, etc. is very different; as engineer Laurie Spiegel says, it should be called “data omission” instead of “data compression.” This is because parts of the audio are judged as not important (usually because stronger sounds are masking weaker sounds), so the masked parts are simply omitted and are not available for playback. Once discarded, that data cannot be retrieved, so a copy of a compressed file can never exhibit higher quality than the source.
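A simplified analogy makes the point. Real perceptual codecs are far more sophisticated than this, but coarse quantization discards information the same irreversible way — “re-saving” at a finer resolution just stores the already-degraded data more faithfully:

```python
def quantize(samples, step):
    """Coarse quantization throws away detail, much as lossy
    encoding discards masked audio. (Illustrative only.)"""
    return [round(s / step) * step for s in samples]

original = [0.11, 0.47, 0.83, 0.29]
lossy = quantize(original, 0.25)     # coarse "low bit rate" pass
resaved = quantize(lossy, 0.01)      # finer "high bit rate" re-save

# The re-save changes nothing: the original detail is gone for good.
print(lossy == resaved)   # True
```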
Don’t ever go over 0 VU when recording digitally.
The reason for this rule is that digital distortion is extremely ugly, and when you go over 0 VU, you’ve run out of headroom. And frankly, I do everything I can to avoid going over 0. However, as any guitarist can tell you, a little clipping can do wonders for increasing a signal’s “punch.” Sometimes when mixing, engineers will let a sound clip just a tiny bit—not enough to be audible, but enough to cut some extremely sharp, short transients down to size. It seems that as long as clipping doesn’t last more than about 10 ms, there is no subjective perception of distortion, but there can be a perception of punch (especially with drum sounds).
Now, please note I am by no means advocating the use of digital distortion! But if a mix is perfect except for a couple clipped transients, you needn’t lose sleep over it unless you can hear that there’s distortion.
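For the curious, here’s what “letting a transient clip” does to the samples themselves — a minimal sketch of digital hard clipping (values are made up for illustration):

```python
def hard_clip(samples, ceiling=1.0):
    """Digital hard clipping: anything past full scale is simply flattened."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

# A short transient that pokes about 2 dB over full scale...
transient = [0.2, 0.9, 1.26, 1.1, 0.6, 0.1]
clipped = hard_clip(transient)
# ...comes back with its peak shaved flat at 1.0; everything else is untouched.
```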
And here’s one final hint: If something contains unintentional distortion that’s judged as not being a deal-breaker, it’s a good idea to include a note to let “downstream” engineers (e.g., those doing mastering) know it’s there, and supposed to stay there. You might also consider normalizing a track with distortion to -0.1 dB, as some CD manufacturers will reject anything that hits 0 because they will assume it was unintentional.
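Peak normalization of this kind is simple arithmetic: find the loudest sample, then scale everything so that sample lands at the target level instead of full scale. A sketch (function name mine):

```python
def normalize_peak(samples, target_dbfs=-0.1):
    """Scale so the loudest sample peaks at target_dbfs instead of 0 dBFS."""
    peak = max(abs(s) for s in samples)
    gain = (10 ** (target_dbfs / 20)) / peak   # -0.1 dB is a gain of ~0.9886
    return [s * gain for s in samples]
```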
Digital recording sounds worse than vinyl or tape because it’s unnatural to convert sound waves into numbers.
The answer to this depends a lot on what you consider “natural,” but consider tape. Magnetic particles are strewn about in plastic, and there’s inherent (and severe) distortion unless you add a bias in the form of an ultrasonic AC frequency to push the audio into the tape’s linear range. What’s more, there’s no truly ideal bias setting: you can raise the bias level to reduce distortion, or lower it to improve frequency response, but you can’t have both, so any setting is by definition a compromise. There are also issues with the physics of the head that can produce response anomalies. Overall, the concept of using an ultrasonic signal to make magnetic particles line up in a way that represents the incoming audio doesn’t seem all that natural.
Fig. 1: This is the equalization curve your vinyl record goes through before it reaches your ears.
Vinyl doesn’t get along with low frequencies, so there’s a huge amount of pre-emphasis added during the cutting process, and equally huge de-emphasis on playback—the RIAA curve (Fig. 1) boosts the response by up to 20 dB at low frequencies and cuts by up to 20 dB at high frequencies, which hardly seems natural. We’re also talking about a playback medium that depends on dragging a rock through yards and yards of plastic.
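The standard RIAA playback (de-emphasis) curve is defined by three published time constants — 3180, 318, and 75 microseconds — which makes the roughly ±20 dB figure easy to verify. A sketch, normalized to 0 dB at 1 kHz as is conventional:

```python
import math

# Published RIAA time constants, in seconds
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_playback_db(freq_hz: float) -> float:
    """RIAA de-emphasis magnitude in dB, normalized to 0 dB at 1 kHz."""
    def mag(f):
        w = 2 * math.pi * f
        return math.hypot(1, w * T2) / (math.hypot(1, w * T1) * math.hypot(1, w * T3))
    return 20 * math.log10(mag(freq_hz) / mag(1000.0))

# Roughly +19.3 dB at 20 Hz and -19.6 dB at 20 kHz, relative to 1 kHz
for f in (20, 1000, 20000):
    print(f"{f} Hz: {riaa_playback_db(f):+.1f} dB")
```

The cutting (pre-emphasis) curve is simply the mirror image: bass cut and treble boost going onto the lacquer, undone by the phono preamp on playback.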
Which of these options is “most natural” is a matter of debate, but it doesn’t seem that any of them can make too strong a claim about being “natural”!
Craig Anderton is Editor Emeritus of Harmony Central. He has played on, mixed, or produced over 20 major label releases (as well as mastered over a hundred tracks for various musicians), and written over a thousand articles for magazines like Guitar Player, Keyboard, Sound on Sound (UK), and Sound + Recording (Germany). He has also lectured on technology and the arts in 38 states, 10 countries, and three languages.