
Game Changer



  • CMS Author

 

Actually, I think it's going back up. Remember when everyone recorded at 16 bits? I hardly know anyone who doesn't record at 24 bits these days.

 

 

I record 16-bit or 256 kbps MP3 with my handheld recorder. There's no benefit, given what's going in, to use anything "better." In the studio, sometimes I'll record at 24 bit, but it's more on a whim than because it sounds better.

 

 

Audio engines have gone from 24-bit fixed to 32-bit float, and 64-bit fixed. Converters continue to evolve.

 

 

These are good things, and they actually do sound better. But from a system standpoint, there are still limits outside the digital part that have yet to be improved. How many mics, preamps, power amplifiers, and listening environments can take advantage of the dynamic range of a good 24-bit converter?

 

 

The "internet standard" of 128 kbps MP3 has pretty much passed the torch to higher bit rates like 256 kbps, and both the MP3 and AAC compression algorithms have improved dramatically in terms of quality.

 

 

I think this is a worthwhile "upgrade." The recorder I have in my hand at the moment will record 24-bit at a 96 kHz sample rate, but now that I've gotten over the practice of making an audio CD from the recordings (which led me to record at 16-bit 44.1 kHz) and recognize that I throw away most of what I record with it after a short while, I'm recording at 320 kbps MP3 and never worry about running out of recording space.
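The space savings are easy to quantify. Here's a rough sketch of the arithmetic, assuming stereo recording and decimal gigabytes (the specific rates are just the ones mentioned above):

```python
# Rough storage math: hours of stereo audio per gigabyte (decimal GB),
# comparing 24-bit/96 kHz uncompressed WAV against 320 kbps MP3.

def wav_bytes_per_second(bits: int, sample_rate: int, channels: int = 2) -> int:
    """Uncompressed PCM data rate in bytes per second."""
    return sample_rate * (bits // 8) * channels

def mp3_bytes_per_second(kbps: int) -> int:
    """MP3 data rate in bytes per second (the bit rate covers all channels)."""
    return kbps * 1000 // 8

GB = 1_000_000_000

wav_rate = wav_bytes_per_second(bits=24, sample_rate=96_000)  # 576,000 B/s
mp3_rate = mp3_bytes_per_second(320)                          # 40,000 B/s

print(f"24/96 WAV: {GB / wav_rate / 3600:.2f} hours per GB")  # ~0.48 h
print(f"320k MP3:  {GB / mp3_rate / 3600:.2f} hours per GB")  # ~6.94 h
```

At roughly fourteen times the recording time per gigabyte, it's easy to see why running out of space stops being a worry.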

 

Remember that when we first started using MP3 compression, it was to support music players that had only 64 MB (some less) of memory. People wanted to store more than 6 songs on their players, so they started using lower and lower bit rates. That's what gave MP3 a bad rap.

 

 

And lossless compression algorithms that reconstruct the original signal, instead of working by "data omission," are becoming common. People record real WAV files on their iPods instead of MP3s...and so on.

 

 

This is indeed a good thing for people who download music and listen to it on a good system, but most listening today happens not only on less accurate playback systems, but in environments where we can't very well appreciate the better quality. It has to be good enough not to be annoying. You don't want a bit rate low enough that the music sounds like it's being played underwater, and you want low harmonic distortion because that's fatiguing.

 

But now we're going around in circles at the back end. To make players smaller and run for a longer time before recharging the battery, there's less and less power available to drive the headphones. So they're making more efficient headphones. So you have to replace your old headphones.

 

 

In analog, even low-cost pres are class A instead of AB. I did specs on the Audiobox 1818VSL - 8 mic pres for $500 - and while they obviously weren't as good as something like the Apollo, when your distortion products are sitting below -110dB and the noise is below -115dB, with ruler-flat response from 5Hz to 18kHz and crosstalk below -85dB, there's really not much to complain about.
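Converting those spec figures into percentages helps show why there's so little to complain about. This is just a sketch of the standard 20·log10 voltage-ratio conversion applied to the numbers quoted above, not a measurement of the unit itself:

```python
# Convert distortion/crosstalk levels quoted in dB (relative to the signal)
# into the percentage figures more familiar from spec sheets.

def db_to_percent(db: float) -> float:
    """Voltage-ratio dB relative to the fundamental, expressed as a percent."""
    return 10 ** (db / 20) * 100

# Distortion products sitting at -110 dB relative to the signal:
print(f"distortion: {db_to_percent(-110):.5f}%")  # ~0.00032%
# Crosstalk at -85 dB:
print(f"crosstalk:  {db_to_percent(-85):.4f}%")   # ~0.0056%
```

At a few ten-thousandths of a percent, those products are far below anything audible through a mic and room.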

 

 

Agreed, but I think that "Class A" has become a marketing term. 5534 op amps still sound really good when they're properly applied. The performance characteristics you describe above aren't a result of how the amplifiers are biased.

 

 

It's too bad DSD never caught on, though.

 

 

Or Beta VCRs, either.

 

I won't argue that technical quality isn't improving, but enough is enough. Now I want things that work better and are easier to use.


  • CMS Author

 

Actually, I think it's going back up. Remember when everyone recorded at 16 bits? I hardly know anyone who doesn't record at 24 bits these days.

 

 

Here's a different response to one I posted earlier:

 

"The bar" isn't the highest that's possible, it's what you strive to exceed. I think that "the bar" for audio production technology is already high enough for most of us. Whether you choose to record at 16-bit or 24-bit resolution, as long as you're using decent quality hardware that's less than about 5 years (technologically) old, only those who feel compelled to be overachievers (or those too lazy to properly set and monitor the record level) need 24-bit resolution.

 

When it comes to MP3 encoding, a higher bit rate really does sound better, at least up to 256 kbps and maybe up to 320 kbps for some listeners, and with today's computers and memory capacity there's no reason not to use at least 256 kbps for music distribution. Why set the bar any higher when we can all get to as good as it needs to be right now? The way to overachieve with perceptual encoding is to mix in a way that gives the encoder less to do, and how to do that isn't asked often enough. Maybe you should write an article on the subject (in fact I proposed it as a presentation topic in one of your mastering workshops for this year's PreSonuSphere).

 

At this stage in my career and life, I'd rather limbo under your bar than try to high jump over it.


  • Moderators

In the studio, sometimes I'll record at 24 bit, but it's more on a whim than because it sounds better.

 

But you want people with their hearing intact to listen to your recordings too, don't you? :) Seriously? You don't think 24 bit sounds better than 16?


  • CMS Author

But you want people with their hearing intact to listen to your recordings too, don't you? :) Seriously? You don't think 24 bit sounds better than 16?

 

Much of the time, no. Seriously. Can I tell the difference? Sometimes - it depends on the program material. If I'm recording a concert, which is mostly what I do, the ambient noise floor is well above the noise floor of a good 16-bit A/D converter. And a lower recorder noise floor is all that increasing the word length buys you.

 

Can I make 16-bit sound worse than 24-bit? Sure, by keeping the peak level less than -30 dBFS and boosting it by 30 dB, but why should I? I can set levels better than that. However if a customer (Customer: someone who's paying for my services) asks for a 24-bit recording, I'm happy to accommodate and I won't try to talk him out of it.
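The "-30 dBFS then boost by 30 dB" point reduces to simple arithmetic. Here's a sketch using the textbook quantization-noise figure (6.02N + 1.76 dB for a full-scale sine) - a theoretical ceiling, not a measurement of any particular converter:

```python
# Theoretical quantization floor for an N-bit converter, and the effective
# floor you're left with after under-recording and normalizing back up.

def quantization_floor_dbfs(bits: int) -> float:
    """Ideal N-bit SNR for a full-scale sine: 6.02*N + 1.76 dB below 0 dBFS."""
    return -(6.02 * bits + 1.76)

def effective_floor(bits: int, peak_dbfs: float) -> float:
    """Noise floor relative to the program peak when peaks sit below 0 dBFS."""
    return quantization_floor_dbfs(bits) - peak_dbfs

# Recording with peaks at -30 dBFS, then boosting 30 dB afterward:
print(f"16-bit: {effective_floor(16, -30.0):.0f} dB below peak")  # ~ -68 dB
print(f"24-bit: {effective_floor(24, -30.0):.0f} dB below peak")  # ~ -116 dB
```

At roughly -68 dB the 16-bit floor starts to matter; set levels properly and it doesn't, which is the point being made above.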

 

Back when I was recording on tape, there were times where recording at 30 ips sounded better than recording at 15 ips (both of which always sounded better than recording at 7-1/2 ips) but I usually recorded at 15 ips unless the customer asked for 2x speed. He's buying the tape. But with hard disk recording, it seems the customer almost never pays for disk space other than perhaps to bring in a drive for backups.


  • Members

 

Why set the bar any higher when we can all get to as good as it needs to be right now?

 

 

I agree with this.

 

Maybe this is different from your reasoning, but the limiting factor on almost every recording that I have made for money has been the talent. You can go a long way not screwing up a good performance, but a technically pristine recording of mediocrity still sucks.

 

This fact of life has led me to believe that there aren't a lot of gains to be made in tech... the elements that require optimization are human.


  • Members

 

I suppose that as the initial poster of "Game Changer", it would be important for me to elaborate. Consider the cost/function ratio of this product line. Those who would benefit most from it are places like churches and small coffee houses. These often have different people with little experience changing the settings on analog mixers. The switch they use most successfully is the "Suck Button". Imagine a system that can store snapshots for different regular performers, controlled by different sound men, all with consistent level, EQ, compression, and mix.
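The snapshot idea sketches out simply enough. This is only an illustration of the concept - the field names (gain, EQ, compression) are hypothetical, and a real console like the X32 stores far more state per channel:

```python
import json

# Minimal sketch of per-performer console snapshots saved to a JSON file.
# Field names here are invented for illustration; real console scene files
# are far richer than this.

def save_snapshot(path: str, performer: str, channels: dict) -> None:
    """Add or overwrite one performer's snapshot in a JSON preset file."""
    try:
        with open(path) as f:
            presets = json.load(f)
    except FileNotFoundError:
        presets = {}
    presets[performer] = channels
    with open(path, "w") as f:
        json.dump(presets, f, indent=2)

def load_snapshot(path: str, performer: str) -> dict:
    """Recall a performer's snapshot as a starting point, to be tweaked live."""
    with open(path) as f:
        return json.load(f)[performer]

save_snapshot("presets.json", "open_mic_duo",
              {"ch1": {"gain_db": 35, "eq_hi_db": -2, "comp_ratio": 3.0},
               "ch2": {"gain_db": 30, "eq_hi_db": 0, "comp_ratio": 2.0}})
print(load_snapshot("presets.json", "open_mic_duo")["ch1"]["gain_db"])  # 35
```

The recall-then-tweak workflow is exactly what the posts below describe: the preset gets you in the ballpark, and the room does the rest.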

 

 

That's an interesting hypothesis... I believe that first one must imagine a coffeehouse / church / local-venue performer who brings a consistent sound and technique to every performance. Without a consistent performance, trying to depend upon presets seems like putting the cart before the horse. ;-)

 

 

The idea of "game changer" is just an expression to emphasize the significant Moore's Law changes in the audio field. Behringer's competitors are now going to be forced to equal or surpass the X32, and a new price war will ensue.

 

 

I think that everyone - Behringer's competitors included - understands that Behringer is at the low end of the price curve. Behringer's detractors (and some satisfied customers) tend to believe that they're getting exactly what they pay for. What would be a game changer is if this product (or some other product) garnered not only a mention for "bang for the buck", but also the respect and recommendation of key influencers in the MI industry. Only time will tell whether that can happen with the X32. A marketing blurb does not create the customer's long-term experience with a product.


  • Moderators

 

That's an interesting hypothesis... I believe that first one must imagine a coffeehouse / church / local-venue performer who brings a consistent sound and technique to every performance. Without a consistent performance, trying to depend upon presets seems like putting the cart before the horse. ;-)

 

 

Pretty much everyone who does live sound night after night for the same artist discovers that recalling the preset from last night's show, or just having the "analog preset" of the faders and EQ still set from last night, gets you in the ballpark if there's no time to do a proper sound check. Of course you do have the room sound (which is a huge factor), but your drive rack or mains EQ helps with that. There's no "unreverb" box, so you do have to tweak the 'verb settings from the previous show quite a bit to adjust to wherever you are on the continuum from a metal shed with a concrete floor to outdoors.

 

Bottom line is it's generally better than starting from a zero'd out board if you're really short on time.

 

Crews that work for bands who tour a lot generally set up everything as identically as possible night after night. Sometimes all we get is a quick line check and the sound check is the first song with audience present. That goes for pro bands too.

 

Often mixing a big show is 45 seconds of sheer terror followed by 45 minutes of abject boredom. I think seeing "Behringer" written on the mixer would add to the terror at this point, not relieve it.

 

Terry D.


  • Members

Most venues in my area, be they coffee houses, clubs, or bars, have their own sound man. And many times the soundman gets a cut of the door. I can see the X32 being good for venues with the same sound guy, day in, day out. We have a venue in our area that hosts a "Battle of the Bands" once a month, and for that the X32 would be great. Soundcheck each band, save a preset with their name, and you're good to go next time around. Yes, you will still need to tweak the mix, but it's a great start.

If the X32 takes off, you will see others follow. Hell, the other manufacturers probably have working prototypes as we type.


  • CMS Author

Maybe this is different from your reasoning, but the limiting factor on almost every recording that I have made for money has been the talent. You can go a long way not screwing up a good performance, but a technically pristine recording of mediocrity still sucks.

 

Well, I try not to be obviously critical of my fellow musicians. ;)

 

What I think is significant, however, is that the digital recording process is now better than anything else in the chain from microphone to loudspeaker. For example, if I'm recording something quiet and I need a lot of gain in the mic preamp, I'm going to get hiss (which will be accurately recorded in all its glory). Using more bits isn't going to fix that, but using a better mic and/or preamp might.

 

Similarly, if you use a simulator plug-in in lieu of a guitar amplifier and microphone, maybe a better plug-in would sound more like a real guitar and amplifier, or maybe the one you have is OK if you spend more time learning how to tweak it, and also learning what a guitar really sounds like. Same with drums, and any other virtual instrument.


  • Moderators

...the elements that require optimization are human.

 

That is the best thing I've read today. That's great.

 

I'll tell you the sad truth: I got very good at pocketing drums, bass, and guitars and tuning vocals without artifacts, for the simple reason that I couldn't bear bands sounding as bad as they do. It's not "right" what I did :), but I couldn't live with the reality with some bands. At this point I've lost all desire to record bands. They aren't any good any more. Yes...

 

"the elements that require optimization are human"


  • Members

That is the best thing I've read today. That's great.


I'll tell you the sad truth: I got very good at pocketing drums, bass, and guitars and tuning vocals without artifacts, for the simple reason that I couldn't bear bands sounding as bad as they do. It's not "right" what I did :), but I couldn't live with the reality with some bands. At this point I've lost all desire to record bands. They aren't any good any more. Yes...


"the elements that require optimization are human"

 

Well, you're a better engineer than me: I gave up on it before getting to where I could make gridded drums and tuned performances sound passable.

 

My conclusion (and maybe this is flawed) was that if the drummer can't play in the pocket to begin with -- the sine qua non of a good performance -- then they probably are missing the finer points of the performance. Just because something is in tune and on time doesn't mean that it's a performance that I'd like to share with other people-- that's just the minimum requirement (assuming we're not talking about the kinds of imperfections that we like in performances....) and if that minimum isn't there, what is the point in moving forward?


  • CMS Author

 


I was wondering if you were saying that Firewire causes audible artifacts. It appears that was not your complaint; it's the reliability issue instead. Fair enough.

 

 

The biggest reliability problem with Firewire, I think, is with the connectors themselves. It doesn't take very much to nudge one enough so that it loses contact, and then the connection needs to be reset. Nobody ever made a locking Firewire connector.

 

The other thing is that Windows (the operating system) doesn't support Firewire audio devices, so everybody who makes one for the PC has to supply a driver. You'll see a lot of discussion about the "quality" of the drivers. At the time I was keeping up with these things, about half a dozen years ago, there were three chipsets used in most of the Firewire audio devices - BridgeCo, TI, and TC Electronic. TC provided a pretty robust driver that most everyone who used their chipset built their support software around. BridgeCo had a kernel that took a lot of building around, and I don't know what TI offered in the way of support to the audio hardware manufacturers. I don't know what chipset RME uses, but they seem to write all of their own software and are very good about fixing bugs and writing low-latency code. Their stuff is also more expensive than just about anyone else's. Apogee gave up and is just supporting the Apple OS now, and that's all Metric Halo ever did (both companies have very fine Firewire performance).

 

As it stands today, some companies are keeping up with the latest operating systems and others aren't. I have a Mackie 1200F that works fine under WinXP, but there's no driver and mixer application support for it under Win7, and there never will be since it's a "legacy" product. PreSonus comes up with a new installation package every few months which includes a driver, but mostly these new versions are enhancements to the software application which, of course, isn't a bad thing unless they break the core driver in the process.


  • Members

And I'll ask you, as I asked at the beginning of this thread, what game will it change, and how? You think every band is going to start using stereo in-ear monitors? Remember, there's more to an IEM system than just enough output buses to feed it.

 

Well, I read the quick start guide of the X32 and imo, it appears to offer a lot of bang for the buck. In that respect, I think this thing may very well turn out to be game changer for many bands and even budget studios.

 

I realize very well that there's more to IEM systems than just enough buses to feed them. Right now, I am touring with a band where half the band uses in-ears and the other half still uses wedges. 9 times out of 10 we can't bring our own sound engineer with us, and there are 5 lead singers in this band. Soundchecks can get really complicated that way, especially when the local engineer doesn't speak English :(

 

On a completely unrelated note: your name sounds somewhat familiar to me. Do you happen to be anywhere close to Nashville? If I'm not mistaken, we had some interesting discussions on an audio bulletin board called "pro.rec" (or something like that) many, many years ago. I recall Brian Tankersley took part in those discussions as well. If you're the same Mike, you once promised to buy me dinner the next time I came to Nashville ;)


  • CMS Author

On a completely unrelated note: your name sounds somewhat familiar to me. Do you happen to be anywhere close to Nashville? If I'm not mistaken, we had some interesting discussions on an audio bulletin board called "pro.rec" (or something like that) many, many years ago. I recall Brian Tankersley took part in those discussions as well. If you're the same Mike, you once promised to buy me dinner the next time I came to Nashville ;)

 

No, I'm not in Nashville, but you might have been thinking of the rec.audio.pro newsgroup. I still check in there every day and post now and then, but there are very few of the old gang left. Haven't seen anything of Brian in years, since he made himself famous with the world's largest PARIS DAW. I wonder where that is now.

 

But you can buy me dinner if you come to the DC area. ;)


  • Members

These are good things, and they actually do sound better. But from a system standpoint, there are still limits outside the digital part that have yet to be improved. How many mics, preamps, power amplifiers, and listening environments can take advantage of the dynamic range of a good 24-bit converter?

 

But I think Beck was commenting about the digital end of things.

 

I think that "Class A" has become a marketing term. 5534 op amps still sound really good when they're properly applied. The performance characteristics you describe above aren't a result of how the amplifiers are biased.

 

The distortion products are lower with Class A than Class AB, and most op amp outs are AB. I know the feedback reduces distortion, as does slight forward bias in discrete equivalents, and there are always tricks like adding a load resistor to one supply side to keep one leg turned on all the time. Still, Class A is a more elegant topology. I'd rather use something that inherently has no crossover distortion compared to tweaking something to minimize crossover distortion.

 

As to 5534 op amps I've used 'em for years, but I gotta say some of the newer Burr-Brown stuff sounds great (although I suppose they're economically prohibitive when you have a unit with 8 mic pres that you're trying to bring in for under $500 :)).

 

I won't argue that technical quality isn't improving, but enough is enough. Now I want things that work better and are easier to use.

 

No argument there. I was addressing Beck's point about the lowered bar.


  • Members

 

Whether you choose to record at 16-bit or 24-bit resolution, as long as you're using decent quality hardware that's less than about 5 years (technologically) old, only those who feel compelled to be overachievers (or those too lazy to properly set and monitor the record level) need 24-bit resolution.

 

 

The first time I went from 16- to 20-bit converters I heard an immediate, obvious improvement (not a "wine-tasting" one). Of course source material enters into it; some material doesn't have the dynamic range to matter. But by the time you've given up a bit for dithering and a bit for noise floor, you're at around -80dB for 16 bits, and I definitely have analog gear that can do better. In that case, 16 bits is the bottleneck for the recording quality.


  • Members

BTW I actually was able to lay hands on a Behringer X32 a couple weeks ago and check it out functionally, but not for audio quality. It is very impressive, and they did put some serious thought into making it easy to use. I only had about 10 minutes but it was a pretty impressive 10 minutes.


  • Members

No, I'm not in Nashville, but you might have been thinking of the rec.audio.pro newsgroup.

 

Aha, yes, that was it! I used to spend a lot of time there a long time ago. Mid/late '90's IIRC.

 

I still check in there every day and post now and then, but there are very few of the old gang left. Haven't seen anything of Brian in years, since he made himself famous with the world's largest PARIS DAW. I wonder where that is now.

 

Haven't talked to Brian in ages, either. I wonder how he's doing these days.

 

But you can buy me dinner if you come to the DC area. ;)

 

LOL. I promise whenever I get in the DC area in the future, dinner is on me, Mike :)


  • CMS Author

 

The first time I went from 16- to 20-bit converters I heard an immediate, obvious improvement

 

 

Yeah, but when was that? About 1995? Back then you were hearing the difference between about 12 and 15 bits.

 

 

But by the time you've given up a bit for dithering and a bit for noise floor, you're at around -80dB for 16 bits and I definitely have analog gear that can do better. In that case, 16 bits is the bottleneck for the recording quality.

 

 

In theory, yes, but in practice, not always. You have to actually be able to use that A/D converter in order for the question (and answer) to be significant. Connect your favorite preamp to your favorite converter, turn the preamp gain up full, terminate the input with a 150 ohm resistor (it'll probably be putting out about -65 dBu of noise), and record what comes out of the converter at both 16 and 24 bits. How much difference do you measure? How much difference do you hear?
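That test reduces to comparing levels once you fix the converter's full-scale reference. Here's a sketch assuming 0 dBFS corresponds to +24 dBu - a common pro calibration, but an assumption on my part, not something stated above:

```python
# Where does -65 dBu of preamp self-noise land relative to the theoretical
# converter floors? ASSUMPTION: 0 dBFS = +24 dBu (a common calibration).

FULL_SCALE_DBU = 24.0

def dbu_to_dbfs(dbu: float) -> float:
    """Convert an analog level in dBu to dBFS, given the full-scale point."""
    return dbu - FULL_SCALE_DBU

preamp_noise_dbfs = dbu_to_dbfs(-65.0)   # -89 dBFS
floor_16 = -(6.02 * 16 + 1.76)           # ~ -98 dBFS, ideal 16-bit sine SNR
floor_24 = -(6.02 * 24 + 1.76)           # ~ -146 dBFS, ideal 24-bit sine SNR

print(f"preamp noise: {preamp_noise_dbfs:.0f} dBFS")
print(f"16-bit floor: {floor_16:.0f} dBFS, 24-bit floor: {floor_24:.0f} dBFS")
# The preamp noise sits ~9 dB above even the 16-bit floor, which is why the
# 16- and 24-bit recordings of that test can measure (and sound) much alike.
```

Under that calibration assumption, the analog noise dominates both recordings, which is the point of the experiment.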

 

Remember that dynamic range and signal-to-noise ratio are not the same. Signal-to-noise ratio is measurable: there's a maximum output level and there's a noise level, and you just take the ratio of the two. Dynamic range, however, isn't so clear. You can measure the maximum output level, but since we can extract information from a signal that's somewhat below the noise level, defining the bottom end of the dynamic range is more difficult.

 

Usually we do it theoretically, and that's where you get numbers like 112 or 115 dB of dynamic range for a converter (the component). But when you measure the noise level of the converter (the box), you'll find it somewhere in the -75 to -85 dBFS range for a modern 24-bit converter.


  • CMS Author

 

BTW I actually was able to lay hands on a Behringer X32 a couple weeks ago and check it out functionally, but not for audio quality. It is very impressive, and they did put some serious thought into making it easy to use. I only had about 10 minutes but it was a pretty impressive 10 minutes.

 

 

I spent about half an hour with the X32 at the NAMM show. I agree that there's been a lot of thought put into its usability, but I still had to think about where to find things when I wanted to do something. I have that problem with a lot of new gear, though, most recently with a TASCAM DR-40 recorder.


  • Members

The elephant in the room is "build quality". While I don't necessarily have Behringer-phobia, I have had enough hit-and-miss experiences with their products that I'm reluctant to jump right into purchasing one of these things. It will obviously be the heart and soul of any rig.


  • Moderators

 

The elephant in the room is "build quality". While I don't necessarily have Behringer-phobia, I have had enough hit-and-miss experiences with their products that I'm reluctant to jump right into purchasing one of these things. It will obviously be the heart and soul of any rig.

 

 

Once you start thinking of their stuff like a bag of Bic disposable razors, it all gets simple: you just carry a spare. The price per use of Behringer gear is quite low, probably cheaper than any other manufacturer. If you assign a high negative value to failure during a show or recording session, then of course that utility curve gets thrown out the window.

 

Terry D.


  • CMS Author

 

Once you start thinking of their stuff like a bag of Bic disposable razors, it all gets simple: you just carry a spare.

 

 

Maybe you're a bigger guy than me, or you have a bigger van, but to me, carrying a spare console isn't quite the same as carrying a spare razor. Carrying a spare Behringer is probably better than carrying a spare Yamaha, but then most people don't carry a spare Yamaha because they trust the one they use. You can't go into a purchase like this with the assumption that it will fail but a backup is cheap.

 

 

If you assign a high negative value to failure during a show or recording session, then of course that utility curve gets thrown out the window.

 

 

But isn't this nearly always the case? Do you want to be the one everyone's staring at for five minutes while you unplug everything from the console that failed and plug in your backup?

 

You also need to consider that the console isn't the only potential point of failure, so a problem can be more difficult to diagnose. Suppose you're using the Cat5 snake and the stage box end fails? Swap out the console and you still have the problem. Did you bring a backup stage box as well?

 

And while it takes the argument in a totally different direction, total failure of an analog console is unusual. About the only thing that can take down the entire console is the power supply. A digital console, however, tends to be binary - either it all works or none of it does. You can usually work around a noisy or dead channel, but not a dead console.

