
Does the master fader in a virtual mixer change the balance of the individual tracks?


A. Einstein


If you hear a different mix when you turn the master volume up or down, it is because of the Fletcher-Munson curve... your perception, and not a real change in the mix.

 

http://www.surfacedstudio.com/music/louder-is-better-the-fletcher-munson-curves/

 

And as to your 0 / -6 / -12 example: there may be a difference in the sound when the volume is turned up to compensate, because FEWER BITS ARE USED AT LOWER VOLUMES. But the difference will be minuscule.

 

Dan
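Dan's fewer-bits point can be sketched numerically. The following is a minimal, hypothetical demo (plain Python, made-up test tone, idealized quantizer rather than any real converter): quantize a sine to 16 bits at different playback levels and measure the signal-to-noise ratio. Each 6 dB of attenuation costs roughly one bit, i.e. about 6 dB of SNR.

```python
import math

def quantize(x, bits=16):
    """Round a sample in [-1, 1] to the nearest n-bit integer step."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

def snr_db(gain_db, bits=16, n=48000):
    """SNR (dB) of a test sine quantized at the given playback level."""
    gain = 10 ** (gain_db / 20)
    sig = err = 0.0
    for i in range(n):
        x = gain * math.sin(2 * math.pi * i / 97)  # arbitrary test tone
        e = quantize(x, bits) - x
        sig += x * x
        err += e * e
    return 10 * math.log10(sig / err)

# SNR drops by ~6 dB for every 6 dB the signal is turned down:
for level in (0, -6, -12):
    print(level, round(snr_db(level), 1))
```

The absolute numbers land near the textbook "6.02 x bits + 1.76 dB" figure for a full-scale sine; the point is only that the loss per 6 dB is small at 16 bits and negligible at 24.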


 

Does the master fader in a virtual mixer change the balance of the individual tracks?



For example:


1) I make a mix - the master stereo fader is on 0 dB.


2) Then, when everything is balanced, I decide to take the master fader back by 6 dB.



Does taking the stereo fader back change the loudness relation between the individual tracks?


Or, asked the other way around: does the mix exported with the stereo fader at 0 dB sound the same as the mix exported with the stereo fader at -6 dB, if I make that -6 dB mix 6 dB louder later?
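The round-trip question can be tested with single made-up samples (a hypothetical sketch, not any DAW's actual engine): attenuate by 6 dB, boost by 6 dB, and see what survives in floating-point versus 16-bit integer math.

```python
g = 10 ** (-6 / 20)      # -6 dB as a linear gain factor (~0.501)
x = 0.123456789          # made-up sample value

# Floating-point round trip: error at the limit of double precision
y = (x * g) / g
print(abs(y - x))        # vanishingly small

# 16-bit round trip: quantize after attenuating, then boost back
q = round(x * g * 32767) / 32767 / g
print(abs(q - x))        # on the order of one 16-bit step divided by the gain
```

In a float engine the two exports null out for all practical purposes; a (tiny) difference only appears if the attenuated mix is quantized to a fixed bit depth before being boosted.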

 

 

This reminds me of the time when I used my lawn mower as a snow blower.


 

If you hear a different mix when you turn the master volume up or down, it is because of the Fletcher-Munson curve... your perception, and not a real change in the mix.




And as to your 0 / -6 / -12 example: there may be a difference in the sound when the volume is turned up to compensate, because FEWER BITS ARE USED AT LOWER VOLUMES. But the difference will be minuscule.


Dan

 

 

I agree with you, Dan. The main audible consequence of the Fletcher-Munson effect is that the perceived amount of bass and treble drops as the sound level of the mix is reduced. The amplitude relationships within the mix should not vary (but might, depending on how well the software was designed), but what you hear will vary a LOT.

 

Another effect has to do with the listening environment. At high sound levels, the room modes will be excited and the frequency components of the mix that you hear will change. I assume Mr. Einstein would be working in a pro studio (judging by some of the photos he's posted), so this shouldn't be a huge factor. If he really wants to test the theory, he should consider doing it outside, well away from any structures, or in an anechoic chamber, to eliminate this contribution to what he hears.

 

The reduction in bits at lower levels will increase harmonic distortion due to quantization error, but the amplitudes of those signals should not vary, only their harmonic content. This would almost certainly be inaudible in any system using more than 22 bits/sample.

 

In short, it's your environment, ears and brain doing this, not the audio hardware and software....


 

If you hear a different mix when you turn the master volume up or down, it is because of the Fletcher-Munson curve... your perception, and not a real change in the mix.


You're right, but this is beside the point of the OP, since he's compensating the volume after -- the level remains the same.

 

 

And as to your 0 / -6 / -12 example: there may be a difference in the sound when the volume is turned up to compensate, because FEWER BITS ARE USED AT LOWER VOLUMES. But the difference will be minuscule.

Dan

Bingo, with a possible exception of say an unmastered, uncompressed/limited mix (with rogue peaks) that doesn't clip, saved to 16 bits. Say the RMS is in the -18 to -24 range. Most likely only the peaks that should be limited exceed -6dB or even -12dB. So, one or two bits are wasted already. Saving that to 16 bits would leave you with 15 or 14 significant bits most of the time -- not too terrible. But if you now waste two of those bits you're down to 13 or 12 bits, where it doesn't take golden ears to hear the harshness caused by quantization noise -- fewer than 12 bits in a quiet passage.
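The bit counting in that scenario follows from the ~6.02 dB of dynamic range each bit buys. A back-of-the-envelope helper (hypothetical, just the arithmetic above, not a measurement of any real file):

```python
import math

DB_PER_BIT = 20 * math.log10(2)   # ~6.02 dB of dynamic range per bit

def significant_bits(peak_dbfs, bits=16):
    """Rough count of bits actually exercised by a signal peaking at peak_dbfs."""
    return bits + peak_dbfs / DB_PER_BIT

print(round(significant_bits(-6), 1))            # ~15.0 bits left of 16
print(round(significant_bits(-12), 1))           # ~14.0
print(round(significant_bits(-18, bits=24), 1))  # ~21.0: barely dented at 24 bits
```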

 

But no doubt you're talking about 24 bits. The bottom 4 bits are just noise anyway, and even if they were significant signal, the difference would be minuscule.

 

Something a lot of people don't know about 24-bit converters: the bottom 4 bits are noise. That works a lot better than a 20-bit converter where the bottom 4 bits are zero, and I suspect that getting all 24 bits accurate wouldn't be possible without supercooling.


You have speaker sensitivity and preamp thresholds you're dealing with here. I think that's going to trump most Fletcher-Munson effects.

Speakers have a minimum wattage/SPL level at which they can faithfully reproduce the full frequency spectrum. Go below it and it starts sounding like crap: the woofers roll off before the tweeters because they are less efficient. Running speakers at 50-70% of the RMS max rating has always been a rule of thumb for getting the maximum frequency response without noise and distortion.

As far as preamps, or power amps for that matter, again, running them in the 50-70% range allows the semiconductors (or tubes) to run on a fairly flat frequency response curve. This is one reason an 85 dB listening level is recommended for studio gear: the amps and speakers are supposed to work efficiently gain-staged at that level. Of course room acoustics matter too.

Having an overpowered monitor system can be detrimental to getting a good mix in any case, because you may be running it below its minimum sensitivity levels. This is where matching audio gear to the listening environment is part of the acoustic engineering trade, and it's exactly why acoustic engineers are hired for studio designs.


Please help me. People think I am a spambot, I am NOT a spambot. I am chained to a desk in Sri Lanka and must post the same thing, over and over, in sites like this. Nike shoes, Ugg boots, Coach bags... over and over. They feed me bird seed and maybe a carrot on weekends.

 

The only thing that keeps me going is my love for the A. Einstein dog. What a cute puppy! I would like to hug that dog, and hold him in my arms while we talk about Ugg boots. Also I would like sponges to grow in general Martian telephones, but that is the next of roots-like Coca-Cola. Let's get to number one and be co-dependent! I like dogs!

 

Whiz the propmaster, airplanes shrink when drawers fly in paste. Kudos!


 

It depends entirely on how linear the entire signal path is including the monitoring path. Lots of people make assumptions they probably shouldn't.

 

 

I don't see how that relates to the original premise of the thread:

 

 

Does the master fader in a virtual mixer change the balance of the individual tracks?


For example:


1) I make a mix - the master stereo fader is on 0 dB.


2) Then, when everything is balanced, I decide to take the master fader back by 6 dB.


Does taking the stereo fader back change the loudness relation between the individual tracks?

 

 

I don't think he's talking about anything to do with monitoring. If the individual tracks have a certain level relationship with respect to each other, then bringing them all down by 6dB would preserve that relationship. If it didn't, grouping wouldn't work.
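That preservation argument is easy to check numerically. A hypothetical sketch with made-up track levels: scaling every track by the same gain leaves every pairwise ratio, and therefore every dB difference between tracks, unchanged.

```python
tracks = {"kick": 0.8, "bass": 0.5, "vocal": 0.9}   # made-up linear peak levels
g = 10 ** (-6 / 20)                                  # -6 dB common gain
scaled = {name: level * g for name, level in tracks.items()}

# The ratio between any two tracks survives the common gain change:
print(tracks["kick"] / tracks["bass"])    # 1.6
print(scaled["kick"] / scaled["bass"])    # 1.6, up to float rounding
```

This is exactly why fader grouping works: a shared multiplier cancels out of every ratio.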


Okay Bob and Craig... here's a different question:

 

 

a) I make a mix - the master stereo fader stays at 0 dB until the mix is done.

 

b) Then I take all track faders back 1 dB by selecting them all and moving them down 1 dB at once. To verify that it is exactly 1 dB, I use one fader that is exactly at 0 dB and move it down 1 dB while all the other selected track faders follow.

 

 

Does taking back all track faders at once change the loudness relation between the individual tracks, or is the whole mix now in exactly the same balance, only 1 dB softer?


 

...If the individual tracks have a certain level relationship with respect to each other, then bringing them all down by 6dB would preserve that relationship.

This assumes that his level relationships were chosen while listening to a perfectly linear monitoring system fed by a perfectly linear signal path. I've just learned never to assume that. Yes, it's picking at nits, but so is balancing a mix. The way to avoid this sort of thing is to leave plenty of headroom in the mix level so monitoring non-linearity can't become a problem.

 

Certainly in most digital mixes, reducing the master fader makes more sense than reducing all of the channel faders like we needed to do with some analog consoles. For some reason people often make analog assumptions about the digital parts of a DAW and digital assumptions about the analog parts such as monitor path and converters.


 

I agree with you, Dan. The main audible consequence of the Fletcher-Munson effect is that the perceived amount of bass and treble drops as the sound level of the mix is reduced. The amplitude relationships within the mix should not vary (but might, depending on how well the software was designed), but what you hear will vary a LOT.

 

 

All of which is irrelevant for two reasons: one, this is a somewhat trolly thread, and two, as Einstard explained, since he's bringing the volume back up, Fletcher Munson isn't an issue.

 

However, I believe you are contradicting yourself good sir.

 

Bass and treble IS amplitude.

 

Changing the perception of bass and treble = changing the perception of amplitude (at low and high freqs).


 

Or, asked the other way around: does the mix exported with the stereo fader at 0 dB sound the same as the mix exported with the stereo fader at -6 dB, if I make that -6 dB mix 6 dB louder later?

 

 

If you're genuinely curious, why not test it by comparing the two waves to see if they're identical?
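The standard way to run that comparison is a null test: subtract (or invert and sum) the two renders and look at what remains. A minimal sketch with made-up sample data, not any particular editor's feature:

```python
def residual(a, b):
    """Largest absolute sample difference between two renders."""
    return max(abs(x - y) for x, y in zip(a, b))

mix_at_0db = [0.10, -0.30, 0.25, 0.05]             # made-up samples
g = 10 ** (-6 / 20)
mix_roundtrip = [(s * g) / g for s in mix_at_0db]  # -6 dB render, boosted back

print(residual(mix_at_0db, mix_roundtrip))         # effectively zero in float math
```

If the residual is at or below the noise floor, the two exports are identical for all practical purposes.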


What I want to know is: what is the decimation theory behind all that, in relation to the perceived dynamics between different sections, for example the alto flute playing ppp while the trumpets play forte...

 

Or what happens when the string section is peaking at -23 dBFS while the percussion is peaking at -2 dBFS, in the context where I take all faders back by 7 dB: the string section is then peaking at -30 dBFS, and the drum set still at -9 dBFS.

 

And what is the logic behind digital faders? When I take back two linked faders, the result isn't linear.


This assumes that his level relationships were chosen while listening to a perfectly linear monitoring system fed by a perfectly linear signal path. I've just learned never to assume that. Yes, it's picking at nits, but so is balancing a mix. The way to avoid this sort of thing is to leave plenty of headroom in the mix level so monitoring non-linearity can't become a problem.

 

I understand that, but unless I misread the OP I think the question was whether the individual tracks would retain their level relationship. With any modern audio engine, whether 32-bit float or 48 or 64-bit linear, there's enough headroom on individual channels that even if they're going into the red, the signal in the individual channels will show no evidence of distortion. Distortion will likely happen when those "hot" signals hit the master out and interface with the real world, but as long as those signals are in the box, the individual channels have for all practical purposes unlimited headroom from a computational standpoint and they are not "aware" that non-linearity is happening further downstream.

 

Bringing down the master fader will definitely maintain the level relationship of the individual channels, because all you're doing is a divide by operation, and with 32-bit floating point, the resolution is certainly sufficient to do this accurately. Granted a 16-bit engine might get dicey with low-level signals, but I'm assuming the question was about contemporary, not crappy - oops, I mean "vintage" - digital gear. :)
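The "divide operation" point can be illustrated by simulating 32-bit float rounding with Python's struct module (a hypothetical sketch; real engines differ in their internals): even after forcing every intermediate result down to float32 precision, two channels keep their 2:1 relationship to within a couple of ulps.

```python
import struct

def f32(x):
    """Round a Python double to 32-bit float precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

g = f32(10 ** (-6 / 20))       # master fader at -6 dB
a, b = f32(0.7), f32(0.35)     # two channel samples in a 2:1 relationship

ratio_before = a / b
ratio_after = f32(a * g) / f32(b * g)
print(abs(ratio_after - ratio_before))   # within float32 rounding, ~1e-7
```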

 

Now, regarding the monitoring part of the equation...assume that the master fader is turned up too high, and there is non-linearity at that stage, in the monitoring process. That doesn't affect the headroom on the individual channels, and they have, for all practical purposes, no headroom limitations as long as they're in the box. So, suppose you bring down the master fader by 6dB. Okay, all the individual channels are down by 6dB, and for the reasons given above they've retained their level relationship with respect to each other. Now you bring that mix back into the system. If the master fader is still down by -6dB, then yes, it's entirely possible the interface to the physical world is now "backed off" sufficiently that the non-linearity issues are either reduced or gone. But, as soon as that master fader is brought back to where it was originally, then the same non-linearities will return - not due to anything that's going on in the individual channels, but at the output, where the extreme amount of "in the box" headroom no longer applies because you've left the box.

 

Right?

 

Certainly in most digital mixes, reducing the master fader makes more sense than reducing all of the channel faders like we needed to do with some analog consoles.

 

Absolutely, and I hope people are paying attention because I'm tired of getting mixes that have crunches because the master fader is too high... :mad: They just don't realize that's where the bottleneck occurs.


 

What I want to know is: What is the decimation theory behind that all.

 

 

It's not about decimation; decimation is more involved with sampling rates, and you're talking about amplitude. Amplitude changes are "digital audio 101": simple mathematical multiplies and divides. For example, if you want to bring something down by half, you just multiply it by 0.5.
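The dB-to-gain arithmetic being described, as a quick sketch:

```python
import math

def db_to_gain(db):
    """Convert a decibel change to a linear amplitude multiplier."""
    return 10 ** (db / 20)

def gain_to_db(gain):
    """Convert a linear amplitude multiplier to decibels."""
    return 20 * math.log10(gain)

print(round(db_to_gain(-6.02), 3))   # 0.5: -6 dB is (almost exactly) a halving
print(round(gain_to_db(0.5), 2))     # -6.02
```

So a master fader at -6 dB is just every sample multiplied by ~0.501, which is why the channel relationships cannot change.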


 

All of which is irrelevant for two reasons: one, this is a somewhat trolly thread, and two, as Einstard explained, since he's bringing the volume back up, Fletcher Munson isn't an issue.


However, I believe you are contradicting yourself good sir.


Bass and treble IS amplitude.


Changing the perception of bass and treble = changing the perception of amplitude (at low and high freqs).

 

 

I'll concede the point about the volume being brought back up, IF he's using a sound pressure meter while doing so.

 

On your other point, I was referring to perceived loudness at those frequencies at different sound pressure levels, versus the actual relative amplitudes of signals at those frequencies. If you think these are the same, you are just wrong. This is exactly the scenario the Fletcher-Munson curve addresses.

 

Ultimately, perception is not reality any more than knowledge is understanding. The former is just an approximation to represent the latter inside someone's mind.


 

I'll concede the point about the volume being brought back up - - IF he's using a sound pressure meter while doing so.

 

 

I'm not sure about that.

 

The Fletcher-Munson equal-loudness curves were made by testing people, possibly with headphones, similar to the pure-tone audiometric tests audiologists do, but I am not sure about that either.


My question would be: are all DAW programs equal here? I know, for example, the older version of Cubase I have appears to sound more analog when adjusting the tracks, and seems to have a sweet spot for mixing similar to an analog mixer. Sonar's mixer seems to be very linear, maintaining frequency linearity with gain changes.

It may just be how the virtual mixer playback is wired to the interface that gives those perceived effects. Or I may have been deluding myself; I just don't know how much.

 

I have done enough tests to satisfy my curiosity on this by importing the same tracks into various DAWs, and I was able to confirm the mixdown results were similar enough to conclude that the mixer didn't produce different results. When I had the best mix possible on both, they showed no signs of coloration the way different analog mixers do.

I'm still thinking the playback features or wiring of different virtual mixers may vary enough that a mixer using his ears may be guided to mix differently. I doubt all DAW manufacturers use the same code, but I'm no software expert, so I don't know when and where processing occurs and how the results may vary.

 

There are some things I believe will be factors that should be considered.

Meter response differs between DAW programs. Some programs, like Sonar, have adjustments for meter scaling and transient response speed. Are all DAW programs' defaults the same on these?

If the speed is slow and peaks are missed, you may wind up mixing too hot. Too fast, and you'll see the transients and mix cooler. The tempo of the transients will peak the meters differently as well, especially with compressors running.

 

Another item is pan laws. Some DAWs have adjustable pan laws and some don't. I know some won't affect tracks panned mono, but they do affect tracks panned in a stereo mix.
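For reference, one common pan law can be sketched as follows (a generic constant-power formulation, not any specific DAW's implementation): each side is dropped 3 dB at center so that the perceived level stays roughly constant across the pan sweep.

```python
import math

def constant_power_pan(pan):
    """pan in [-1 (hard left), +1 (hard right)] -> (left_gain, right_gain)."""
    theta = (pan + 1) * math.pi / 4
    return math.cos(theta), math.sin(theta)

left, right = constant_power_pan(0.0)    # centered
print(round(20 * math.log10(left), 2))   # -3.01 dB per side at center
print(round(left**2 + right**2, 6))      # total power stays 1.0 across the sweep
```

DAWs that offer -3, -4.5, and -6 dB center options are choosing different trade-offs in this same formula, which is why the same session can balance differently across programs.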

 

Again, if you use the same DAW programs for comparisons, strip these items out of the equation, and stick to theory only, I can definitely go along with what Craig has been posting. But since these variables are present in DAW programs to make them appear more analog-like, I'm wondering if we may be excluding key items here. They may have no effect on the data and be purely illusory, but they do influence how I mix, to the point where I can't completely trust that what the console meters show matches what I'm hearing.


Archived

This topic is now archived and is closed to further replies.

