
What's the industry standard sampling rate for pro studios - 48 or 96 kHz?


Recommended Posts

  • Members

The rule of thumb I tell interns is any time you start to worry about sample rate, change the microphone. If you're still worried about sample rate, move the microphone. If you're still worried about sample rate, try a different preamp. Patch an EQ. Patch a compressor. Rebalance your mix. Listen on a different set of speakers. In other words, there are so many better things to worry about that will improve the sound and better your skills as an engineer than sample rate.

 

Always work at 24-bit. Don't sample-rate convert unless/until absolutely necessary. Then forget about it and get on with life.


  • Members

 

I saw that too, but couldn't see any logic in doing it. Dithering twice destroys any benefits gained by mastering at a higher sample rate.

 

 

...and mastering at a higher sample rate than you recorded gives an extended frequency range, to reproduce entirely inaudible tones that are certainly absent (since you recorded at a lower rate that could not capture them), and which the final playback medium's sample rate couldn't reproduce anyway. An artful fail: nuanced and multilayered.
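To put a number on the "could not capture them" part: anything above half the sample rate (the Nyquist limit, 24 kHz at 48 kHz) doesn't just vanish politely; if it reaches the converter unfiltered, it folds back as an alias. A minimal numpy sketch, with the 30 kHz tone and 48 kHz rate purely as illustrative picks:

```python
import numpy as np

fs = 48_000            # sample rate
f_tone = 30_000        # tone above Nyquist (fs / 2 = 24 kHz)
n = np.arange(fs)      # one second of samples
x = np.sin(2 * np.pi * f_tone * n / fs)

# The spectral peak shows up at the alias, not at 30 kHz
spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / len(x)
print(peak_hz)         # ~18000.0 -> 48000 - 30000, folded back below Nyquist
```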


  • Members

I know in digital photography they will upsample so they can manipulate the photo with minimal damage to the original. But they basically take a lower-resolution photo, scan it at a higher resolution, then synthesize what's nonexistent to get the resolution higher.

 

There are a few tools that can synthesize high frequencies, for example, that may be of benefit. The best results I'm able to get are to master at whatever the recording rate was, then downsample and dither as the last step, after brickwalling.
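For what it's worth, that "downsample and dither last" order looks roughly like the Python/scipy sketch below. The finalize name, the 147/320 ratio (96 kHz to 44.1 kHz), and plain TPDF dither are my illustrative assumptions, not a recipe from anyone's actual chain:

```python
import numpy as np
from scipy.signal import resample_poly

def finalize(x, up=147, down=320, bits=16):
    """Downsample, then dither/quantize exactly once as the last step.
    147/320 is the 96 kHz -> 44.1 kHz ratio; x is float audio in [-1, 1]."""
    y = resample_poly(x, up, down)              # polyphase, anti-aliased resample
    lsb = 2.0 ** (1 - bits)                     # one 16-bit quantization step
    tpdf = (np.random.uniform(-0.5, 0.5, y.shape) +
            np.random.uniform(-0.5, 0.5, y.shape)) * lsb
    return np.round((y + tpdf) / lsb) * lsb     # quantize once, dither once
```

Quantizing and dithering exactly once, at the very end, is also the point about double dithering above: each extra pass just stacks another layer of noise.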

 

I did experiment with upsampling for a while in hopes of getting better results, back when I really didn't know what I was doing. I compared the differences with an accurate static frequency analyzer to see if there were any beneficial results my ears might have been missing. I couldn't find anything majorly detrimental about working that way. I didn't find any improvements either.
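If anyone wants to run the same kind of A/B, a crude "static analyzer" is a few lines of numpy. avg_spectrum here is just a hypothetical helper, not any particular plugin:

```python
import numpy as np

def avg_spectrum(x, fs, nfft=8192):
    """Average magnitude spectrum over overlapping Hann-windowed frames --
    a crude stand-in for a static analyzer when A/B-ing two renders."""
    frames = [x[i:i + nfft] * np.hanning(nfft)
              for i in range(0, len(x) - nfft, nfft // 2)]
    mags = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    freqs = np.fft.rfftfreq(nfft, 1 / fs)
    return freqs, 20 * np.log10(mags + 1e-12)   # dB, floored to avoid log(0)
```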

 

When I compared the two, mastered with the same plugins and settings, I just heard that the upsampled one lost something on the high end, especially the cymbals. The other had more body, a more 3D sound quality, and air.

 

I realize that's not scientific, but my experience has taught me that a minimalist attitude toward processing just works better than anything else. The less I do to a wave, the more natural realism is preserved. Strip noise, tweak rogue frequencies; that's normal. If I can do something with a single plugin versus using two or three to do the same job, it's less mathematical manipulation of the wave and it preserves what's there.

 

Of course, if it wasn't there to begin with, you're in the realm of restoration or synthesis. I've wasted too many hours trying every technique to make something dull shine. It's totally unproductive. I'll take quality tracking any day of the week. Tweaking crappy tracks is hard work and will make you old before your time.


  • Members

 


...Of course, if it wasn't there to begin with, you're in the realm of restoration or synthesis. I've wasted too many hours trying every technique to make something dull shine. It's totally unproductive. I'll take quality tracking any day of the week. Tweaking crappy tracks is hard work and will make you old before your time.

 

 

The hilarious part, to continue the photography analogy, is that we're talking about taking a photo with a digital camera that doesn't capture the (invisible-to-humans) UV spectrum, converting it to a medium that does, editing the photo in a manner that disregards UV, printing it with inks that do not reproduce it, and then arguing that doing so increases the dpi count.


  • Members

Very true. I've worked in that field at my day job for nearly thirty years now, for most of the major manufacturers. A small amount of infrared and UV may strike the photocells and may influence the signal within the cell itself, but it's quickly filtered out in subsequent circuits. It's like the preamp in an interface: if the frequency response is 20 Hz~20 kHz, anything above that won't make it to the converters. And how many mics reproduce frequencies above 20 kHz anyway, even with harmonics involved? Guitars may produce string tones up to 5 kHz or so, etc.
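The "won't make it to the converters" point is easy to picture as a filter stage. A toy scipy sketch, where the eighth-order Butterworth is just my stand-in for whatever bandlimit a real analog front end has:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 96_000
# An 8th-order 20 kHz lowpass standing in for the analog bandlimit
sos = butter(8, 20_000, btype="low", fs=fs, output="sos")

n = np.arange(fs)
audible = np.sin(2 * np.pi * 1_000 * n / fs)      # 1 kHz tone
ultrasonic = np.sin(2 * np.pi * 30_000 * n / fs)  # 30 kHz tone
y = sosfilt(sos, audible + ultrasonic)
# y is essentially just the 1 kHz tone: the 30 kHz content never
# reaches the converter once the chain is bandlimited
```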

 

If the source can't produce ultra-high frequencies, and the ears couldn't hear those frequencies even if it could, why waste space and CPU resources on them? It's amazing how similar digital photography and video are to digital audio: how the signals are manipulated, compressed, filtered, saved, transmitted, and output with lasers, thermal arrays, monitors, printers, etc.

 

Instead of audio frequencies being sampled into steps of voltages, the photocell voltages produced by light are sampled. Some of the terms mean different things: compression in digital photography means simplifying redundant information to save space, while compression in audio has a different meaning because the terminology was carried over from analog to digital recording. How it's performed in the two formats is quite different from what most may think. Not that it's that important, so long as the results are similar.

 

Getting higher resolution in audio requires working with more data and having a program that takes advantage of that data. If the program ignores a good portion of that data, or filters out what it can't use, what's the sense in having all the extra data to begin with? All it does is take up memory and lower performance. Some say high-order harmonics help add to the realism. I agree with that, to a point.

 

Having super-high-frequency resolution recreated above human hearing levels is questionable, whether you're a believer in psychoacoustics or not. Played back on a car stereo whose speakers may roll off at 16 kHz max, or on earbuds with maybe a 10 kHz range, those frequencies just aren't going to be reproduced, whether they're present or not. Transducer materials have improved a lot in recent years, but they are still very primitive substances that vibrate air at different frequencies and can't possibly make use of all the frequencies an amp produces. Inertia alone smooths many transients, so there are always going to be losses. Luckily, the mind is an easy thing to fool.


