Sillypeoples · Posted October 18, 2012

Originally Posted by Hard Truth: There are dozens of factors that will influence the audio quality of your recordings more than the DAW or PC that you use. They include:

- the quality of the sound sources - Decent guitar
- microphone choice - None
- mic placement - None
- recording room acoustics - N/A, direct to DAW
- preamp quality - Fostex?
- the quality of any devices between the preamp and the audio interface - SS amp
- the quality of the audio interface to your computer - DAW, USB, laptop
- the recording levels you recorded with - No clipping
- the quality of the plug-ins you use - What's the standard?
- your ability to do a good mix - Subjective

It's like playing on stage at a jam where you bring the SS amp and he brings a nice tube amp... I can hear it; not everyone can. You can outplay the guy, but he just sounds better.
BushmasterM4 · Posted October 18, 2012

Originally Posted by Sillypeoples: I record at 44 and upload to Soundclick. It's very apparent to me that at a certain ranking I hit a wall, and that wall was recording quality... and I go direct to DAW. The guys above me in the rankings, all things equal, just sounded better. The only thing I could do was get a better DAW, maybe better software, and maybe a better PC, which is all about computing that sampling and data at a higher resolution, more accurately, with fewer issues. Can you hear the 44 vs. 48 difference? Throw up your music on Soundclick and see how high it goes... if you get to, say, the top 20 and find everyone just sounds better than you at the higher rankings, you tell me what else you can do other than get the gear that makes you sound even better.

I've heard some really nice stuff done with low-end gear. It all boils down to: you put {censored} in, you get {censored} out.
Anderton · Posted October 18, 2012

Originally Posted by BushmasterM4: I've heard some really nice stuff done with low-end gear.

Me too, and I've heard some pretty awful stuff with expensive gear. That said... I'm really serious about the mastering aspect. I've gotten some projects in for mastering that were pretty sketchy in terms of sound quality, but was able to polish them up pretty nicely. As long as people don't give me tracks that are distorted, they can invariably be improved. IMHO distortion and misuse of dynamics are the real quality-killers, more so than the gear.
UstadKhanAli (thread author) · Posted October 18, 2012

It's not so much the sound quality between 44.1kHz and 48kHz. That's negligible. It's more an issue of this: is it more advantageous now to record at 48kHz if most people are listening to our audio on MP3s (where the 44.1kHz doesn't matter) and video (including YouTube and Vimeo)?
MikeRivers (CMS Author) · Posted October 18, 2012

Originally Posted by Sillypeoples: Can you hear the 44 vs. 48 difference? Throw up your music on Soundclick and see how high it goes... if you get to, say, the top 20 and find everyone just sounds better than you at the higher rankings, you tell me what else you can do other than get the gear that makes you sound even better.

Well, for starters, they can have better songs, they can play them better, and they can record and produce them with more skill and money than you have. What do you hear when you listen to those top 20 songs? You don't hear sample rate, that's for sure. There is certainly a point below which "surface quality" is noticeable, but that's not what sells, or doesn't sell, a recording. It's the music and how it's performed.
MikeRivers (CMS Author) · Posted October 18, 2012

While I don't think this research paper in itself justifies recording wider bandwidths than 20 kHz, it's an interesting study in what comes out of musical instruments that we can't directly hear. Note that a spectrum analyzer with an effective sample rate in excess of 200 kHz was used in these experiments.
WRGKMC · Posted October 18, 2012

I had done a batch of solo recordings recently. Nothing special, just my normal songwriting stuff. I mixed, mastered, and burned CDs and played them in the car. After listening to them I'm asking myself why the heck the mixes have no edge, and figured I must have blown it mixing the highs. After checking the mixes and not finding anything unusual, I started wondering whether the new DAW I had just set up could be the cause. I was using the same interfaces and didn't think the sound would change, but I kept rethinking whether that was the cause, or whether it was just me, and blew it off. I later found out, after battling with the mixes, that my settings had defaulted back to 44.1 at some point during the DAW changeover.

I did another batch of recordings at the higher sample rate, mixed and mastered, and made another CD with a mixture of the old and new recordings. The results were like, wow, what a difference. There was a noticeable improvement in the brightness and clarity of all the new recordings. One other note: the recordings I had mixed at 24/44.1 were mixed down (actually upsampled) in Sonar to 32/48 for mastering. I thought I was just keeping the same sample rates. I don't know if this upsampling was any factor in the sound quality; I'd think it would be beneficial for mastering, if anything. In any case I'm back to 24/48, and I find the extra edge/clarity good for the stuff I do. I had tried 24/96 in the past and just couldn't hear enough benefit to justify the extra space.

The difference between 44.1 and 48 is subtle. Someone with older or untrained ears, or someone using much higher quality gear for tracking, may not notice the difference. I knew something wasn't right for a good month or so. My guitar presets, drum machine, and vocal mic settings hadn't been changed all that much, not enough to change what I was hearing. I just wasn't getting good results.

Then when I did that A/B test between the 44 and 48 recordings, the cause of what the older recordings were lacking was obvious even to my ears, which is saying something considering how many years I've played in loud bands.
Sillypeoples · Posted October 18, 2012

W - Thanks for making my point... here's some science. The argument goes that harmonics in the highest frequencies, although well beyond the range of human ears, have a phase-cancelling effect that affects the ultimate tone of a sound. Cutting these off during recording thus changes the accuracy of the recording. The higher you cut, the less of an effect there is on a tone. It also comes to mind that it's better to record at 88.2 than 96 if your target is 44.1, because when you downsample, it is an even division, not an approximation (88.2/2 = 44.1, whereas 96/44.1 ≈ 2.177, not a whole number). As you may know, bit depth represents resolution in amplitude, and the sampling rate (see the Nyquist theorem) implies resolution in the frequency range sampled. At 44.1k you'll be taking two samples to digitize an entire cycle of a 22kHz sinusoid (or other waveform), which makes a poor representation of the original wave. When you have 20 tracks with a bad representation of the higher frequencies or harmonics mixed into your work, you get something that just doesn't sound the way you want, because the misrepresentation of the original waveforms is multiplied across all 20 tracks. By recording at a higher sample rate you try to minimize this effect in your 20 sources, so when you mix and edit them you don't carry this unwanted effect (that's why audio engines usually process effects internally at higher bit depth and sampling rate, and only at the end is the sound downsampled). The idea seems to be: "record, edit and mix with precise sound and do damage only once, at the end of the work." This video should help explain the concept better.
MikeRivers (CMS Author) · Posted October 18, 2012

Originally Posted by Sillypeoples: The argument goes that harmonics in the highest frequencies, although well beyond the range of human ears, have a phase-cancelling effect that affects the ultimate tone of a sound. Cutting these off during recording thus changes the accuracy of the recording.

This is one plausible explanation for why we hear things differently if we cut off everything above the nominal range of human audibility. There are a few others.

Originally Posted by Sillypeoples: It also comes to mind that it's better to record at 88.2 than 96 if your target is 44.1, because when you downsample, it is an even division, not an approximation.

It may seem obvious, but that's not the way it works. You don't just leave out every other sample when going from 88.2 to 44.1 kHz. If you do, you get something that doesn't sound very good. You have to (mathematically) re-create the original waveform from the samples and then (mathematically) sample that. A good sample rate conversion algorithm sounds fine and offers no perceptible degradation over sampling at the lower rate originally.

Originally Posted by Sillypeoples: At 44.1k you'll be taking two samples to digitize an entire cycle of a 22kHz sinusoid (or other waveform), which makes a poor representation of the original wave.

I have to call bull{censored} here. If you follow the rules (Nyquist's and Shannon's), 2 samples per cycle is sufficient to accurately reconstruct the original source. Remember: everything above 1/2 the sampling frequency gets filtered out, so there will be nothing but the 22 kHz sine wave left. This fallacy, however, will probably never die because it's on the Internet.

Originally Posted by Sillypeoples: The idea seems to be: "record, edit and mix with precise sound and do damage only once, at the end of the work."

That's always good advice, but better advice is not to do any damage.
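Mike's point about sample rate conversion - that you can't just drop every other sample, you have to filter below the new Nyquist limit first - is easy to demonstrate numerically. The following is a minimal numpy sketch (my own illustration, not from the thread); a brick-wall FFT filter stands in for a real resampler's anti-alias filter, and the tone frequencies are arbitrary:

```python
import numpy as np

fs_hi = 88200
n = np.arange(8820)                       # 0.1 s at 88.2 kHz
# a 1 kHz tone we want to keep, plus a 30 kHz ultrasonic component
x = np.sin(2*np.pi*1000*n/fs_hi) + 0.5*np.sin(2*np.pi*30000*n/fs_hi)

# Naive "drop every other sample" decimation to 44.1 kHz
naive = x[::2]
freqs = np.fft.rfftfreq(len(naive), d=1/44100)
spec = np.abs(np.fft.rfft(naive))
# the 30 kHz tone folds down to 44100 - 30000 = 14100 Hz
alias_hz = freqs[np.argmax(spec * (freqs > 2000))]

# A proper converter low-passes below the new Nyquist first
# (a brick-wall FFT filter here, for illustration only)
X = np.fft.rfft(x)
f_hi = np.fft.rfftfreq(len(x), d=1/fs_hi)
X[f_hi >= 22050] = 0
clean = np.fft.irfft(X, len(x))[::2]
alias_level = np.max(np.abs(np.fft.rfft(clean))[freqs > 2000])

print(alias_hz)      # naive decimation: a tone near 14100 Hz that was never played
print(alias_level)   # filtered first: essentially nothing besides the 1 kHz tone
```

Real converters use polyphase filters rather than FFT brick walls, but the principle is the same: remove everything above the new Nyquist limit, then resample.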
Anderton · Posted October 18, 2012

Originally Posted by UstadKhanAli: It's not so much the sound quality between 44.1kHz and 48kHz. That's negligible. It's more an issue of this: is it more advantageous now to record at 48kHz if most people are listening to our audio on MP3s (where the 44.1kHz doesn't matter) and video (including YouTube and Vimeo)?

48kHz is the "standard" for video, but I think MP4 is YouTube's "native" format, in which case I'm fairly sure the audio is converted to AAC anyway. When I render for YouTube uploads, I use 320kbps because YouTube does their own compression IIRC - so the better the source, the better the final version. Sorry to be so sketchy, I'm not sure of the details.
Anderton · Posted October 18, 2012

I think I'll record some projects at 48kHz and see what happens. I haven't done that since early ADAT days, because I didn't hear any significant differences, but hardware has changed a lot since then.
JeffLearman · Posted October 18, 2012

Interesting paper, Mike. I saw another interesting study (though unfortunately nobody ever confirmed it independently) showing that people who listen to gamelan music (which has a lot of HF content) could hear the difference and preferred recordings made at higher rates. This used a total audio chain that preserved the higher rates - all lab-bench quality gear, including the speakers.

That's all beside the point of the OP, of course. Regarding other things being more important: no duh! I know a guy who says, "If you can't make a hit recording with a SoundBlaster and a stick mic, you probably can't make a hit recording with anything." I think he's right. But still, we know we can do better than the stick mic and SoundBlaster!
JeffLearman · Posted October 18, 2012

Originally Posted by MikeRivers: I have to call bull{censored} here. If you follow the rules (Nyquist's and Shannon's), 2 samples per cycle is sufficient to accurately reconstruct the original source. Remember: everything above 1/2 the sampling frequency gets filtered out, so there will be nothing but the 22 kHz sine wave left. This fallacy, however, will probably never die because it's on the Internet.

Right. When you start trying to use intuition regarding this stuff, you can quickly wander into the weeds. (BTW, 2 samples per cycle is sufficient if it's a sine wave, though not "any sinusoid". But it's the sine wave that matters, assuming the theory is correct, and it's mathematically well proven.)
MikeRivers (CMS Author) · Posted October 18, 2012

Originally Posted by learjeff: Right. When you start trying to use intuition regarding this stuff, you can quickly wander into the weeds. (BTW, 2 samples per cycle is sufficient if it's a sine wave, though not "any sinusoid". But it's the sine wave that matters, assuming the theory is correct, and it's mathematically well proven.)

It's all about The Law. Professor Fourier showed us that any arbitrary waveform can be broken down into the sum of one or more sine waves, but the trick to staying within the law is that all of the sine waves that comprise that arbitrary waveform must be below the Nyquist limit. You can record and accurately recover a 20 kHz sine wave. But if you try to record a 20 kHz square wave - which is made up of a 20 kHz sine wave plus as many odd multiples of 20 kHz sine waves as you can shake a sampler at - and you don't filter out the content above the Nyquist limit going into the sampler, you'll get aliasing, which produces frequencies in the output that weren't in the input (hence, distortion). And if you filter out everything greater than 20 kHz going into the sampler, you're left with just a 20 kHz sine wave, which will be reproduced accurately. The catch is that what went in contained frequencies higher than what can be properly sampled, so: garbage out. Of course, if you believe that you can't hear anything above 20 kHz and filter it out, you're able to accurately record what you hear.

Which brings us around to the original question. It's all about whether you believe Boyk et al. or don't care. But if you're using better-grade equipment to get 2x sample rates, then you no longer have a valid comparison for your experiment or demonstration.

The gamelan is an interesting subject, though. Since there are a lot of metal clangy things going on, there's a lot of high frequency content, probably a lot of it above 10 kHz. You can record what mixes in the air, but if our ears take in individual sounds and our brain makes it sound like a gamelan, there may be something going on there that isn't completely represented by sampling. I remember talking to some folks at a NAMM show one year who were peddling a 4x and 2x sample set for a pipe organ. They sampled individual pipes and combined them in software the way all the levers and valves combine them in an organ. Their point was that if they didn't capture the full frequency spectrum of the pipes before they were combined, they wouldn't get the same thing as if they sampled the organ with all the pipes playing at once. Interesting theory, but after a couple of years I didn't see them again.
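Mike's square-wave example can be checked numerically. A sketch of my own (fs = 44.1 kHz and a 20 kHz square wave are his figures; everything else is arbitrary): the unfiltered square's 3rd harmonic at 60 kHz folds down to |60000 - 44100| = 15900 Hz, a tone that was never in the input, while the band-limited version (just the 20 kHz fundamental) produces no such tone.

```python
import numpy as np

fs, f0 = 44100, 20000
n = np.arange(4410)                          # 0.1 s
square = np.sign(np.sin(2*np.pi*f0*n/fs))    # unfiltered 20 kHz square wave

freqs = np.fft.rfftfreq(len(n), d=1/fs)
spec = np.abs(np.fft.rfft(square)) / len(n)

# The 3rd harmonic (60 kHz) exceeds Nyquist and folds to |60000 - 44100| = 15900 Hz
i = np.argmin(np.abs(freqs - 15900))
alias = spec[i]

# Band-limit before sampling: below 22.05 kHz only the fundamental survives
sine = (4/np.pi) * np.sin(2*np.pi*f0*n/fs)
clean = np.abs(np.fft.rfft(sine))[i] / len(n)

print(alias)   # a strong component at 15900 Hz, never present in the input
print(clean)   # essentially zero
```

The folded 15900 Hz tone lands squarely in the audible band, which is exactly the "frequencies in the output that weren't in the input" Mike describes.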
mister natural · Posted October 19, 2012

I have to stay out of this conversation, as you cats are far smarter and more experienced than I am at this stuff... I do love Hard Truth's 10 recording-quality kill-points, laid on top of the quality of the performance of each "take". Then add to that the ~90% of listening consumers using a lossy MP3 copy of one's blood, sweat, and tears' work through fairly cheap earbuds, and I'm not gonna worry about the math of digits any longer. Peace.
Beck · Posted October 19, 2012

Originally Posted by MikeRivers: I have to call bull{censored} here. If you follow the rules (Nyquist's and Shannon's), 2 samples per cycle is sufficient to accurately reconstruct the original source. Remember: everything above 1/2 the sampling frequency gets filtered out, so there will be nothing but the 22 kHz sine wave left. This fallacy, however, will probably never die because it's on the Internet.

I
WRGKMC · Posted October 19, 2012

Originally Posted by Sillypeoples: W - Thanks for making my point... here's some science...

I do fully understand the science behind it. In my case I got caught in my own blind taste test, as you might call it. The problem was I didn't know I was taking the test, but I knew something was wrong from my final results. My final mixes had a thin veil over them, as though they had been downsampled to an MP3, when I was working with wave files. I did go back and remix several times to get the best possible mixes, much of which consisted of tweaking the treble response. In any case I had to work a whole lot harder to get what I knew I could get, and even then it fell short. As a note, my first DAW, used for a good 10 years, only recorded at 16/44.1, so I got really good at getting results at those sample rates.

I suppose it was the drum machine I use that gave me the clues. The ones I use have really good clarity. My recorded tracks normally sound as good as the units just playing live before tracking. I could hear the loss at 44.1 and had to tweak my EQs, which I hadn't been having to do. Same goes for my other instruments. My rack units have many presets set up, and other than guitar strings getting old, those tracks weren't so hot either. Then when the songs recorded at 44.1 and 48 were placed on the same CD, it was like, WTF is going on?

I also noticed the losses using Har-Bal mastering. (Har-Bal is a mastering EQ/frequency analyzer which lets you scan the mixdown and EQ the wave file from the static frequency response it generates.) Songs recorded at 44.1 would slope off to zero between 18k and 20k, and if you tried to boost the frequencies up there, all you'd get was grainy, unusable noise. Recordings at 48k would taper off less gradually in the upper frequencies, then drop off completely above 20k. Some say the frequencies up there aren't heard, which is true in most cases.

The recording with the 18k roll-off means you have 16k of usable music content without HF noise. The recording with a solid 20k roll-off means you have a solid 18k of musical content without HF noise. The difference is heard in the tweeters of the playback system. Those cymbals, snare, and hi-hat really do come through better in my case.

When I tried using 24/96 I couldn't hear much difference over 24/48. I wrote this off to the quality of my studio gear. I'm not using $20K mics and preamps, and I doubt that my monitors or playback systems would reveal any better quality than I'm getting, even if the details of the higher sample rates were captured. If I eventually get better gear I'll take advantage of the higher sample rates. As for now, I live in real-ville. Some may be able to justify using sample rates above and below. They have different ears, setups, and music. I understand what I get and only worry about what makes a difference to my recording and what I can actually hear.
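The kind of high-frequency roll-off check described here can be approximated without Har-Bal (which is a real product; this numpy helper is just a rough stand-in of my own) by comparing the energy above some corner frequency to the total energy:

```python
import numpy as np

def hf_energy_db(x, fs, f_lo=16000.0):
    """Fraction of signal energy above f_lo, in dB - a quick roll-off check."""
    spec = np.abs(np.fft.rfft(x))**2
    freqs = np.fft.rfftfreq(len(x), d=1/fs)
    ratio = spec[freqs >= f_lo].sum() / spec.sum()
    return 10*np.log10(ratio + 1e-20)

# Sanity check with white noise: its spectrum is flat, so the band from
# 16 kHz to 22.05 kHz holds about 6050/22050 = 27% of the energy (~ -5.6 dB)
rng = np.random.default_rng(0)
level = hf_energy_db(rng.standard_normal(44100), 44100)
print(round(level, 1))
```

A mix whose spectrum slopes to zero by 18 kHz would score far below this flat-spectrum baseline, which is the sort of difference the analyzer display makes visible.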
Johnny-Boy · Posted October 19, 2012

Since I record tracks for TV, I mix down to 48kHz/16-bit - the industry standard.

John
MikeRivers (CMS Author) · Posted October 19, 2012

Increasing the sample word length means that the value of each sample is more accurate. But eventually you get to the point where you're sampling noise, and beyond that, longer word length samples are of no practical use.
JeffLearman · Posted October 19, 2012

Originally Posted by MikeRivers: Increasing the sample word length means that the value of each sample is more accurate. But eventually you get to the point where you're sampling noise, and beyond that, longer word length samples are of no practical use.

Actually, that's not quite true: adding bits of noise below the bottom significant bit produces better results. Today's "24-bit" converters are actually 20-bit converters with 4 bits of noise, and the results are better for this than if they simply provided the 20 accurate bits.

I first bumped into this phenomenon with video data for medical imaging, back in the '80s. The ADCs were 10-bit accurate, but the pictures looked incredibly better because the ADCs returned 12-bit values with the bottom two bits being noise. For example, if the image should fade from white on one end to black on the other, without the random bits you very clearly see lines where the shade changes. Add the noise, and the lines fade dramatically. The impact was significant on the actual images (which is why the hardware was designed that way).

Admittedly, visual processing in the brain is quite different from audio processing. The obvious lines in the example above (called "Mach bands") are actually enhanced by neural processing in the retina before the signal even reaches the brain. There's no direct corollary to that in audio processing. But the concept still holds for audio, and it's part of the reason for dithering when reducing bit width.

This is just a quibble, and doesn't contradict your points.
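The dither effect Jeff describes is easy to reproduce. A minimal numpy sketch (my own illustration, with arbitrary settings: an 8-bit quantizer and a tone of roughly 1.3 LSB): rounding a very quiet sine without dither produces a strongly correlated 3rd harmonic, while adding TPDF dither before rounding turns that distortion into benign broadband noise.

```python
import numpy as np

fs, f0, bits = 48000, 1000, 8
n = np.arange(4800)                    # 0.1 s
q = 2.0**-(bits - 1)                   # quantization step (full scale = +/-1)
x = 0.01 * np.sin(2*np.pi*f0*n/fs)     # very quiet tone, ~1.3 LSB peak

plain = np.round(x/q) * q              # no dither: a 3-level staircase

rng = np.random.default_rng(1)
tpdf = (rng.random(len(n)) + rng.random(len(n)) - 1) * q   # +/-1 LSB TPDF dither
dith = np.round((x + tpdf)/q) * q

def tone_level(sig, f):
    """Magnitude of the FFT bin at frequency f (f must land exactly on a bin)."""
    return np.abs(np.fft.rfft(sig))[int(round(f * len(sig) / fs))]

h3_plain = tone_level(plain, 3*f0)     # strong, correlated 3rd harmonic
h3_dith = tone_level(dith, 3*f0)       # only residual noise at that bin
print(h3_plain, h3_dith)
```

This is the same reason the mastering step of dithering down to 16-bit exists: the dithered version trades correlated harmonic distortion for a slightly higher, but far less objectionable, noise floor.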
Anderton · Posted October 19, 2012

Originally Posted by WRGKMC: Some say the frequencies up there aren't heard, which is true in most cases.

I wonder if what happened was better transient response when you used 48kHz, which would have an effect in the audible range. I tend to go with musicians who swear they hear something under conditions such as yours, where you don't know what's going on, so that doesn't influence your conclusions - then you have to work backward to figure out why something sounds different.
Anderton · Posted October 19, 2012

Hey learjeff - thanks for your contributions to this thread, and to this forum in general. I find your posts very informative.
veracohr · Posted October 19, 2012

Originally Posted by Sillypeoples: The argument goes that harmonics in the highest frequencies, although well beyond the range of human ears, have a phase-cancelling effect that affects the ultimate tone of a sound. Cutting these off during recording thus changes the accuracy of the recording.

Wouldn't this phase cancelling be acoustic, and thus happen whether the sampling captures those frequencies or not?
Zooey · Posted October 19, 2012

Originally Posted by veracohr: Wouldn't this phase cancelling be acoustic, and thus happen whether the sampling captures those frequencies or not?

That was my thought. If frequencies outside our range of hearing are affecting things within our range of hearing, then 44.1k should be able to capture it.
WRGKMC · Posted October 19, 2012

Originally Posted by Anderton: I wonder if what happened was better transient response when you used 48kHz, which would have an effect in the audible range.

There's some of that too. All the frequencies do have better definition at 48. I can only vouch for the cards I'm using, which are M-Audio 1010LT cards. They use line-level inputs, so they lack a second stage of preamplification for mics. There are two XLR inputs with jumpers on the board for mic-level gain, but I have those set for line level too. I use external analog preamps into the cards' line-level inputs. I know I could do better, but I bought three of these cards for $50 each, so I saved $600 over buying them new. They only have breakout dongles, so you need a patch bay to connect them up and not have to deal with the stupid RCA jacks.

The converters probably aren't that great, but I've dealt with sub-par gear all my life and know how to get the best with what I've got. It's just the final product that matters. If I did a mix at 24/44.1 and one at 24/48 and mixed straight down to a burnable wave file, I probably couldn't tell a difference. I master my work, and I believe that's where the increased sample rate makes a difference.

My mastering chain is simple. I EQ with Har-Bal and loudness-match to a reference file. I use the Waves multiband and L2 limiter. The L2 limiter gives the music a nice sharp rock edge over other limiters I have. Then I dither and downsample to 16/44.1 and burn a CD. Nothing special, just basic mastering. I'm thinking it's my mastering plugins that do better at 48. As I said, I can see the difference in the top end when using the frequency analyzer, and the frequency peaks throughout the spectrum seem sharper. I'm guessing maybe there's more harmonic content retained in the mix. Anyway, it works. I don't need to know why, beyond fulfilling my own curiosity.
This topic is now archived and is closed to further replies.