
We haven't had a good EQ discussion in a while...



  • Members

...flat probably is an instruction set to do no processing anyway...

 

 

FWIW, a flat EQ in a DSP requires the same amount of processing (MIPs) as any other curve. Although you could save processing by just not performing the math, that is never done as it requires many more lines of code (every additional line of code is another opportunity for failure). Instead, it is much more efficient to keep the qty of stages constant and just populate coefficients that result in a flat curve. (and this is just one of many reasons to do things this way).


  • Members
FWIW, a flat EQ in a DSP requires the same amount of processing (MIPs) as any other curve. Although you could save processing by just not performing the math, that is never done as it requires many more lines of code (every additional line of code is another opportunity for failure). Instead, it is much more efficient to keep the qty of stages constant and just populate coefficients that result in a flat curve. (and this is just one of many reasons to do things this way).

It would seem to me the "least work" way of coding that would be to just switch to grabbing the input value instead of the output value of the EQ code rather than zeroing out all the coefficients and then having to restore them if/when someone hit the "defeat" button again? Then again I'd think a "true" bypass that just skipped the computations would only take a single instruction assuming the "defeat" state was in a register and the input and output of the EQ code was in another single register?


  • Members
It would seem to me the "least work" way of coding that would be to just switch to grabbing the input value instead of the output value of the EQ code rather than zeroing out all the coefficients and then having to restore them if/when someone hit the "defeat" button again? Then again I'd think a "true" bypass that just skipped the computations would only take a single instruction assuming the "defeat" state was in a register and the input and output of the EQ code was in another single register?



Then you haven't coded a DSP :)


  • Members
Last one we cheated and used "Misra C" for the framework code for the TI DSP - made things a whole lot easier. Musta been OK because I haven't heard of any F-35's falling out of the sky recently ;)



That's because the hangar doors were made by Behringer and no one can get them to open :poke:


  • Members
Last one we cheated and used "Misra C" for the framework code for the TI DSP - made things a whole lot easier. Musta been OK because I haven't heard of any F-35's falling out of the sky recently ;)



and my guess is the F-35 didn't have a killer audio system in it. :)

Funny things happen in the audio domain when you vary processing paths in a DSP (and even worse when you have differences in stages or other processing steps). There are workarounds, but why bother when it buys you nothing (if you're running with no MIPs or thermal margin, you have bigger issues) and adds complexity?


  • Members
and my guess is the F-35 didn't have a killer audio system in it. :)

The optional 100 kW laser makes up for that in kill power :).

Funny things happen in the audio domain when you vary processing paths in a DSP (and even worse when you have differences in stages or other processing steps).

I'll agree you don't want constant jitter, but a one-time change in latency when you poke the "defeat" button isn't gonna sound any worse than the artifacts you'll get from instantaneously zeroing out the coefficients, I'd think? Or do you ramp them to zero to simulate an operator turning them to zero manually? I'm sure we've all heard about the 31-band GEQs on the StudioLive "zippering" when you adjust them :facepalm: .

There are workarounds, but why bother when it buys you nothing (if you're running with no MIPs or thermal margin, you have bigger issues) and adds complexity?

Hey, whatever is easiest to code works for me :) . Lots of audio DSP stuff can't run all the processing it's capable of simultaneously, so's I'd tend to practice cycle conservation just on general principle to be ready for that next project that requires it :cool: .


  • Members

Funny things happen in the audio domain when you vary processing paths in a DSP (and even worse when you have differences in stages or other processing steps). There are workarounds, but why bother when it buys you nothing (if you're running with no MIPs or thermal margin, you have bigger issues) and adds complexity?

 

 

Very true; when mixing parallel paths it's essential to tally latency and keep it identical across parallel signal paths that are likely to be combined, either within the device or external to it.

 

Generally, a latency diagram of all signal paths is kept, and any place where there is summing, latency must be added to the earlier signal to make them coherent. Variable latency is a real problem in a device such as a mixer.


  • Members
Very true, when mixing parallel paths it's essential to tally latency and keep it identical across parallel signal paths that are likely to be combined either within the device or external to it.

I'm trying to think of a parallel signal path in a typical mixer?


  • Members

I'm trying to think of a parallel signal path in a typical mixer?

 

 

Every channel, every bus is a parallel path.

 

This applies to analog consoles too, mainly tallying polarities so that whatever functions and routing options are selected, the summing occurs with correct polarities. Say you play back a stereo CD or a stereo keyboard, or any signals where there is common sound like drums. When you sum together, polarity and/or latency should be identical.

 

This is actually one area where digital consoles can excel, as latency (essentially "phase" or delay) can be independently controlled through time make-up code.


  • Members

I can certainly think up a parallel path but when is that ever desirable? I'd think any parallel path created in operation was a f'up but am truly curious when you'd want to do such on purpose? I obviously don't have experience using the big boards with VCA's, 8+ groups, matrices, etc.


  • Members

Remember, we're talking SW chains now, not analog signal flows. Things are different in the SW domain. I mean, we are talking about people that count from zero. :)

In the analog world, you feed the output of one stage (in an EQ) into the input of the next. In the SW world, you process each stage individually then sum them (i.e. a parallel signal path).


  • Members
In the analog world, you feed the output of one stage (in an EQ) into the input of the next. In the SW world, you process each stage individually then sum them (i.e. a parallel signal path).

Within the EQ, sure. I'm pretty sure you'd not want to split the signal to feed the gate and the EQ then sum them afterwards :freak: .

Oh, and there are plenty of parallel EQ designs that are analog - my 'ringer PEQ2000 is, for instance. I suspect most (all?) 31-band GEQs are too?


  • Members
Sorry, I was replying to Mutha Goose. I'm not good with the quote button...

No prob :lol: . Yah, within an EQ section there may be parallel paths, but I can't think of any "normally used" routing on a mixer that would have the same signal summed through more than one path except through an FX, and there a time offset might be what you're after :) but otherwise not a problem I think?


Archived

This topic is now archived and is closed to further replies.

